1
Wintle BC, Smith ET, Bush M, Mody F, Wilkinson DP, Hanea AM, Marcoci A, Fraser H, Hemming V, Thorn FS, McBride MF, Gould E, Head A, Hamilton DG, Kambouris S, Rumpff L, Hoekstra R, Burgman MA, Fidler F. Predicting and reasoning about replicability using structured groups. Royal Society Open Science 2023; 10:221553. [PMID: 37293358] [PMCID: PMC10245209] [DOI: 10.1098/rsos.221553]
Abstract
This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol ('investigate', 'discuss', 'estimate' and 'aggregate'). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study. Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as 'effect size' and 'reputation' (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.
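As an aside on how such judgements can be scored, the sketch below (with made-up numbers rather than the study's data) shows the two standard scoring ideas used in this literature: classification accuracy against replication outcomes and the Brier score.

```python
# Minimal sketch: scoring probabilistic replicability judgements against
# replication outcomes. The numbers below are invented for illustration;
# they are not data from the study.
from statistics import mean

# Probability assigned to "the claim will replicate" for each of several claims.
judgements = [0.85, 0.30, 0.65, 0.15, 0.70]
# Observed replication outcome for the same claims (1 = replicated, 0 = did not).
outcomes = [1, 0, 1, 0, 0]

# Classification accuracy: call "replicate" when the probability exceeds 0.5.
predictions = [1 if p > 0.5 else 0 for p in judgements]
accuracy = mean(1 if pred == obs else 0 for pred, obs in zip(predictions, outcomes))

# Brier score: mean squared difference between probability and outcome (lower is better).
brier = mean((p - obs) ** 2 for p, obs in zip(judgements, outcomes))

print(f"classification accuracy = {accuracy:.2f}, Brier score = {brier:.3f}")
```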
Affiliation(s)
- Bonnie C. Wintle
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Eden T. Smith
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
- Martin Bush
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
- Fallon Mody
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
- David P. Wilkinson
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Anca M. Hanea
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Centre of Excellence for Biosecurity Risk Analysis, School of BioSciences, University of Melbourne, Parkville 3010, Australia
- Alexandru Marcoci
- Centre for the Study of Existential Risk, University of Cambridge, Cambridge, UK
- Hannah Fraser
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Victoria Hemming
- Martin Conservation Decisions Lab, Department of Forest and Conservation Sciences, University of British Columbia, Vancouver, Canada
- Felix Singleton Thorn
- School of Psychological Sciences, University of Melbourne, Parkville 3010, Australia
- Marissa F. McBride
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Centre for Environmental Policy, Imperial College London, London, UK
- Elliot Gould
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Andrew Head
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
- Daniel G. Hamilton
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
- Steven Kambouris
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Libby Rumpff
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- Rink Hoekstra
- Department of Pedagogical and Educational Sciences, University of Groningen, Groningen, The Netherlands
- Mark A. Burgman
- Centre for Environmental Policy, Imperial College London, London, UK
- Fiona Fidler
- MetaMelb Research Initiative, School of Ecosystem and Forest Sciences, University of Melbourne, Parkville 3010, Australia
- MetaMelb Research Initiative, School of Historical and Philosophical Studies, University of Melbourne, Parkville 3010, Australia
2
Hanea AM, Wilkinson DP, McBride M, Lyon A, van Ravenzwaaij D, Singleton Thorn F, Gray C, Mandel DR, Willcox A, Gould E, Smith ET, Mody F, Bush M, Fidler F, Fraser H, Wintle BC. Mathematically aggregating experts' predictions of possible futures. PLoS One 2021; 16:e0256919. [PMID: 34473784] [PMCID: PMC8412308] [DOI: 10.1371/journal.pone.0256919]
Abstract
Structured protocols offer a transparent and systematic way to elicit and aggregate probabilistic predictions from multiple experts. These judgements can be aggregated behaviourally or mathematically to derive a final group prediction. Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. The quality of this aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the "best" final prediction. When experts' performance can be scored on similar questions ahead of time, these scores can be translated into performance-based weights, and a performance-based weighted aggregation can then be used. When this is not possible, though, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. Here, we develop a suite of aggregation methods, informed by previous experience and the available literature. We differentially weight our experts' estimates by measures of reasoning, engagement, openness to changing their mind, informativeness, prior knowledge, and the extremity, asymmetry or granularity of estimates. Next, we investigate the relative performance of these aggregation methods using three datasets. The main goal of this research is to explore how measures of individuals' knowledge and behaviour can be leveraged to produce a better-performing combined group judgement. Although the accuracy, calibration, and informativeness of the majority of methods are very similar, a couple of the aggregation methods consistently distinguish themselves as among the best or worst. Moreover, the majority of methods outperform the usual benchmarks provided by the simple average or the median of estimates.
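A minimal sketch of the kind of mathematical aggregation described here, assuming invented forecasts and proxy-based weights: a weighted linear pool compared against the simple-average and median benchmarks.

```python
# Minimal sketch of mathematical aggregation: a weighted linear combination of
# expert probability forecasts, compared with the usual unweighted benchmarks.
# Experts, forecasts, and weights are invented for illustration only.
from statistics import mean, median

forecasts = {"expert_A": 0.80, "expert_B": 0.55, "expert_C": 0.65}

# Proxy-based weights (e.g., derived from reasoning or engagement scores),
# normalised so they sum to 1.
raw_weights = {"expert_A": 2.0, "expert_B": 0.5, "expert_C": 1.5}
total = sum(raw_weights.values())
weights = {name: w / total for name, w in raw_weights.items()}

weighted = sum(weights[name] * p for name, p in forecasts.items())
simple_mean = mean(forecasts.values())
simple_median = median(forecasts.values())

print(f"weighted linear pool: {weighted:.3f}")
print(f"simple average:       {simple_mean:.3f}")
print(f"median:               {simple_median:.3f}")
```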
Affiliation(s)
- A M Hanea
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- D P Wilkinson
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- M McBride
- Centre for Environmental Policy, Imperial College London, London, United Kingdom
- A Lyon
- DelphiCloud, Amsterdam, The Netherlands
- D van Ravenzwaaij
- Faculty of Behavioural and Social Sciences, University of Groningen, Groningen, The Netherlands
- F Singleton Thorn
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- C Gray
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- D R Mandel
- Cognimotive Consulting Inc., Toronto, Ontario, Canada
- A Willcox
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- E Gould
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- E T Smith
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- F Mody
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- M Bush
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- F Fidler
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- H Fraser
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
- B C Wintle
- MetaMelb Lab, University of Melbourne, Melbourne, Victoria, Australia
3
Harada T. Three heads are better than two: Comparing learning properties and performances across individuals, dyads, and triads through a computational approach. PLoS One 2021; 16:e0252122. [PMID: 34138907] [PMCID: PMC8211165] [DOI: 10.1371/journal.pone.0252122]
Abstract
Although it is often said that two heads are better than one, related studies have argued that groups rarely outperform their best members. This study examined not only whether two heads are better than one but also whether three heads are better than two or one in the context of two-armed bandit problems, where learning plays an instrumental role in achieving high performance. This research revealed a U-shaped relationship between performance and group size: performance was highest for either individuals or triads, and lowest for dyads. Moreover, this study estimated learning properties and determined that a high inverse temperature (exploitation) accounted for high performance. In particular, the inverse temperatures of dyads did not exceed the average of their two group members, whereas triads gave rise to inverse temperatures higher than the average of their individual group members. These results were consistent with our proposed hypothesis that learning coherence is likely to emerge in individuals and triads, but not in dyads, which in turn leads to higher performance. This hypothesis is based on the classical argument by Simmel that while dyads are likely to involve more emotion and generate greater variability, triads are the smallest structure that tends to constrain emotions, reduce individuality, and generate behavioral convergences or uniformity because of "two against one" social pressures. As a result, three heads or one head were better than two in our study.
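For readers unfamiliar with the inverse-temperature parameter, the sketch below implements a generic softmax learner for a two-armed bandit; the parameter values are illustrative and are not the estimates reported in the paper.

```python
# Generic sketch of a two-armed bandit learner with a softmax choice rule,
# illustrating the role of the inverse temperature (beta): higher beta means
# stronger exploitation of the currently better option. Parameters are
# illustrative, not those estimated in the paper.
import math
import random

def run(beta, alpha=0.2, p_reward=(0.3, 0.7), trials=500, seed=1):
    rng = random.Random(seed)
    q = [0.0, 0.0]                       # value estimates for the two arms
    total = 0
    for _ in range(trials):
        # Softmax choice: P(arm 1) = exp(beta*q1) / (exp(beta*q0) + exp(beta*q1))
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        arm = 1 if rng.random() < p1 else 0
        reward = 1 if rng.random() < p_reward[arm] else 0
        q[arm] += alpha * (reward - q[arm])   # simple delta-rule update
        total += reward
    return total / trials

for beta in (0.5, 2.0, 8.0):
    print(f"beta = {beta:>4}: mean reward = {run(beta):.3f}")
```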
Affiliation(s)
- Tsutomu Harada
- Graduate School of Business Administration, Kobe University, Kobe, Japan
4
Liang F, Verhoeven K, Brunelli M, Rezaei J. Inland terminal location selection using the multi-stakeholder best-worst method. International Journal of Logistics Research and Applications 2021. [DOI: 10.1080/13675567.2021.1885634]
Affiliation(s)
- Fuqi Liang
- Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
- Kyle Verhoeven
- Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
- Matteo Brunelli
- Department of Industrial Engineering, University of Trento, Trento, Italy
- Jafar Rezaei
- Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
5
Molleman L, Tump AN, Gradassi A, Herzog S, Jayles B, Kurvers RHJM, van den Bos W. Strategies for integrating disparate social information. Proc Biol Sci 2020; 287:20202413. [PMID: 33234085] [PMCID: PMC7739494] [DOI: 10.1098/rspb.2020.2413]
Abstract
Social information use is widespread in the animal kingdom, helping individuals rapidly acquire useful knowledge and adjust to novel circumstances. In humans, the highly interconnected world provides ample opportunities to benefit from social information but also requires navigating complex social environments with people holding disparate or conflicting views. It is, however, still largely unclear how people integrate information from multiple social sources that (dis)agree with them, and among each other. We address this issue in three steps. First, we present a judgement task in which participants could adjust their judgements after observing the judgements of three peers. We experimentally varied the distribution of this social information, systematically manipulating its variance (extent of agreement among peers) and its skewness (peer judgements clustering either near or far from the participant's judgement). As expected, higher variance among peers reduced their impact on behaviour. Importantly, observing a single peer confirming a participant's own judgement markedly decreased the influence of other, more distant, peers. Second, we develop a framework for modelling the cognitive processes underlying the integration of disparate social information, combining Bayesian updating with simple heuristics. Our model accurately accounts for observed adjustment strategies and reveals that people particularly heed social information that confirms personal judgements. Moreover, the model exposes strong inter-individual differences in strategy use. Third, using simulations, we explore the possible implications of the observed strategies for belief updating. These simulations show how confirmation-based weighting can hamper the influence of disparate social information, exacerbate filter bubble effects and deepen group polarization. Overall, our results clarify what aspects of the social environment are, and are not, conducive to changing people's minds.
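A schematic illustration of confirmation-weighted integration in this spirit (a toy heuristic, not the authors' fitted model): peers closer to one's own judgement receive more weight, so a single confirming peer dampens the pull of more distant peers.

```python
# Schematic sketch of confirmation-weighted integration of peer judgements:
# peers whose estimates sit close to the participant's own estimate get more
# weight. This is a simple heuristic in the spirit of the description, not the
# authors' fitted model; all numbers are illustrative.
def revise(own, peers, own_weight=1.0, scale=10.0):
    # Each peer's weight decays with distance from the participant's estimate.
    peer_weights = [1.0 / (1.0 + abs(p - own) / scale) for p in peers]
    weights = [own_weight] + peer_weights
    values = [own] + list(peers)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

own = 40.0
print(revise(own, [55.0, 60.0, 65.0]))   # disparate peers pull the estimate up
print(revise(own, [41.0, 60.0, 65.0]))   # one confirming peer reduces that pull
```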
Affiliation(s)
- Lucas Molleman
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, Amsterdam, The Netherlands
- Alan N. Tump
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Andrea Gradassi
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Stefan Herzog
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Bertrand Jayles
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Ralf H. J. M. Kurvers
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Wouter van den Bos
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition Center, University of Amsterdam, Amsterdam, The Netherlands
6
Utility and use of accuracy cues in social learning of crowd preferences. PLoS One 2020; 15:e0240997. [PMID: 33112896] [PMCID: PMC7592789] [DOI: 10.1371/journal.pone.0240997]
Abstract
Despite limited information and knowledge, we personally form beliefs about certain properties of objects encountered in our daily life—popularity of a newly released movie, for example. Since such beliefs are prone to error, we often revise our initial beliefs according to the beliefs of others to improve accuracy. Optimal revision requires modulating the degree to which we accept others’ beliefs based on various cues for accuracy—the number of opinions, for example—such that the more accurate others’ beliefs are, the more we accept them. Although previous studies have shown that such accuracy cues can influence the degree of acceptance during social revision, they primarily investigated problems with ‘factually correct’ answers, and rarely problems with ‘socially correct’ answers. Here we examined which accuracy cues are objectively useful (utility of cues), and how those cues are used (use of cues), in the social revision of people’s beliefs about problems with ‘socially correct’ answers. We asked people to estimate the ‘shared preferences (SPs)’ for sociocultural items, the answers to which are determined by socially aggregated beliefs—how popular an abstract painting will be among a large crowd, for example—and then to revise their initial estimates after being exposed to other people’s estimates about the same items. We considered ‘confidence’, ‘agreement among estimates’, and ‘number of estimates’ as accuracy cues. We found that, while all three cues validly signaled the accuracy of SP estimates, only the ‘number’ cue had significant utility; the other cues were much less useful for optimal revision. Nevertheless, people used the ‘agreement’ cue and their own ‘confidence’ to an extent comparable to their use of the ‘number’ cue. Our findings suggest that the utility and use of accuracy cues for problems with ‘socially correct’ answers differ from those with ‘factually correct’ answers, as follows: (i) confidence does not have significant utility, and (ii) people use their own confidence while ignoring others’ confidence.
7
Blanchard MD, Jackson SA, Kleitman S. Collective decision making reduces metacognitive control and increases error rates, particularly for overconfident individuals. Journal of Behavioral Decision Making 2020. [DOI: 10.1002/bdm.2156]
Affiliation(s)
- Simon A. Jackson
- School of Psychology, The University of Sydney, Sydney, NSW, Australia
- Sabina Kleitman
- School of Psychology, The University of Sydney, Sydney, NSW, Australia
8
Nguyen VD, Truong HB, Merayo MG, Nguyen NT. Toward evaluating the level of crowd wisdom using interval estimates. Journal of Intelligent & Fuzzy Systems 2019. [DOI: 10.3233/jifs-179338]
Affiliation(s)
- Van Du Nguyen
- Division of Knowledge and System Engineering for ICT, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Hai Bang Truong
- Faculty of Computer Science, University of Information Technology, Vietnam National University Ho Chi Minh City (VNU-HCM), Vietnam
- Mercedes G. Merayo
- Departamento de Sistemas Informáticos y Computación, Universidad Complutense de Madrid, Spain
- Ngoc Thanh Nguyen
- Faculty of Computer Science and Management, Wroclaw University of Science and Technology, Poland
- Faculty of Information Technology, Nguyen Tat Thanh University, Ho Chi Minh City, Vietnam
9
Jordan SL, Ferris GR, Lamont BT. A framework for understanding the effects of past experiences on justice expectations and perceptions of human resource inclusion practices. Human Resource Management Review 2019. [DOI: 10.1016/j.hrmr.2018.07.003]
10
Mercier H, Morin O. Majority rules: how good are we at aggregating convergent opinions? Evolutionary Human Sciences 2019; 1:e6. [PMID: 37588400] [PMCID: PMC10427311] [DOI: 10.1017/ehs.2019.6]
Abstract
Mathematical models and simulations demonstrate the power of majority rules, i.e. following an opinion shared by a majority of group members. A majority opinion should be followed more when (a) the relative and absolute size of the majority grow, (b) the members of the majority are competent, (c) the members of the majority are benevolent, (d) the majority opinion conflicts less with our prior beliefs, and (e) the members of the majority formed their opinions independently. We review the experimental literature bearing on these points. The few experiments bearing on (b) and (c) suggest that both factors are adequately taken into account. Many experiments show that (d) is also followed, with participants usually putting too much weight on their own opinion relative to that of the majority. Regarding factors (a) and (e), in contrast, the evidence is mixed: participants sometimes optimally take into account the absolute and relative size of the majority, as well as the presence of informational dependencies; in other circumstances, these factors are ignored. We suggest that an evolutionary framework can help make sense of these conflicting results by distinguishing between evolutionarily valid cues, which are readily taken into account, and non-evolutionarily valid cues, which are ignored by default.
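The "power of majority rules" referred to here can be illustrated with a standard Condorcet-style simulation (not one of the specific models reviewed in the paper):

```python
# Standard Condorcet-style simulation: with independent voters who are each
# correct with probability p > 0.5, the accuracy of the majority grows with
# group size. Illustrative only.
import random

def majority_accuracy(n_voters, p_correct, trials=20000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(1 for _ in range(n_voters) if rng.random() < p_correct)
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 3, 11, 51):
    print(f"{n:>2} voters, p = 0.6 each: majority correct {majority_accuracy(n, 0.6):.3f}")
```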
Affiliation(s)
- Hugo Mercier
- Institut Jean Nicod, PSL University, CNRS, Paris, France
- Olivier Morin
- Max Planck Institute for the Science of Human History, Jena, Germany
11
Zhao WJ, Davis-Stober CP, Bhatia S. Optimal cue aggregation in the absence of criterion knowledge. Journal of Behavioral Decision Making 2019. [DOI: 10.1002/bdm.2123]
Affiliation(s)
- Wenjia Joyce Zhao
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
- Sudeep Bhatia
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
12
Eliciting improved quantitative judgements using the IDEA protocol: A case study in natural resource management. PLoS One 2018; 13:e0198468. [PMID: 29933407] [PMCID: PMC6014637] [DOI: 10.1371/journal.pone.0198468]
Abstract
Introduction: Natural resource management uses expert judgement to estimate facts that inform important decisions. Unfortunately, expert judgement is often derived by informal and largely untested protocols, despite evidence that the quality of judgements can be improved with structured approaches. We attribute the lack of uptake of structured protocols to the dearth of illustrative examples that demonstrate how they can be applied within pressing time and resource constraints, while also improving judgements.
Aims and methods: In this paper, we demonstrate how the IDEA protocol for structured expert elicitation may be deployed to overcome operational challenges while improving the quality of judgements. The protocol was applied to the estimation of 14 future abiotic and biotic events on the Great Barrier Reef, Australia. Seventy-six participants with varying levels of expertise related to the Great Barrier Reef were recruited and allocated randomly to eight groups. Each participant provided their judgements using the four-step question format of the IDEA protocol (‘Investigate’, ‘Discuss’, ‘Estimate’, ‘Aggregate’) through remote elicitation. When the events were realised, the participant judgements were scored in terms of accuracy, calibration and informativeness.
Results and conclusions: The results demonstrate that the IDEA protocol provides a practical, cost-effective, and repeatable approach to the elicitation of quantitative estimates and uncertainty via remote elicitation. We emphasise that (i) the aggregation of diverse individual judgements into pooled group judgements almost always outperformed individuals, and (ii) use of a modified Delphi approach helped to remove linguistic ambiguity and further improved individual and group judgements. Importantly, the protocol encourages review, critical appraisal and replication, each of which is required if judgements are to be used in place of data in a scientific context. The results add to the growing body of literature that demonstrates the merit of using structured elicitation protocols. We urge decision-makers and analysts to use these insights and examples to improve the evidence base of expert judgement in natural resource management.
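A highly simplified sketch of the 'Aggregate' step, assuming invented (lowest, best, highest) estimates and plain unweighted pooling; real IDEA analyses typically also standardise intervals and score accuracy, calibration and informativeness.

```python
# Simplified sketch of pooling each participant's (lowest, best, highest)
# estimates for one question into an unweighted group judgement by averaging
# each component. The numbers are invented for illustration.
from statistics import mean

# (low, best, high) second-round estimates from four hypothetical participants.
estimates = [
    (10, 20, 35),
    (15, 25, 40),
    (5, 18, 30),
    (12, 22, 38),
]

group_low = mean(e[0] for e in estimates)
group_best = mean(e[1] for e in estimates)
group_high = mean(e[2] for e in estimates)

print(f"pooled group judgement: best = {group_best:.1f}, "
      f"interval = [{group_low:.1f}, {group_high:.1f}]")
```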
13
Abstract
This study examined whether taking advice is influenced by regulatory fit and whether this effect is reduced or disappears under certain attribution conditions during vocational decision making. Experiment 1 created a vocational decision setting to compare differences in decision makers’ weight of advice (WOA) between ‘eager strategy’ and ‘vigilant strategy’ advice conditions. Results showed no significant main effect of regulatory orientation or advice strategy, but there was a significant interaction: WOA was higher when the advice strategy fit the decision maker’s regulatory focus than when it violated that fit. Experiment 2 examined whether the regulatory fit effect is reduced or disappears under attribution conditions during vocational decision making. Results showed that job seekers take others’ advice more readily under the fit condition, and that a significant interaction existed between regulatory fit and attribution. Thus, attribution could reduce the influence of the regulatory fit effect. Implications for vocational consultants, job seekers, and advisors are also discussed.
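For reference, the weight-of-advice (WOA) index used in this literature is the fraction of the distance between the initial estimate and the advice that the final estimate moves; a quick sketch with hypothetical numbers:

```python
# Weight of advice (WOA): 0 = advice ignored, 1 = advice fully adopted.
# Numbers below are hypothetical.
def weight_of_advice(initial, advice, final):
    if advice == initial:                 # undefined when advice equals the initial estimate
        return None
    return (final - initial) / (advice - initial)

print(weight_of_advice(initial=100, advice=140, final=110))  # 0.25: mild advice taking
print(weight_of_advice(initial=100, advice=140, final=140))  # 1.0: advice fully adopted
```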
14
Affiliation(s)
- Robert L. Winkler
- Fuqua School of Business, Duke University, Durham, North Carolina 27708
15
16
Abstract
In daily decision making, people often solicit one another's opinions in the hope of improving their own judgment. According to both theory and empirical results, integrating even a few opinions is beneficial, with the accuracy gains diminishing as the bias of the judges or the correlation between their opinions increases. Decision makers using intuitive policies for integrating others' opinions rely on a variety of accuracy cues in weighting the opinions they receive. They tend to discount dissenters and to give greater weight to their own opinion than to other people's opinions.
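The diminishing returns from integrating correlated or biased opinions can be illustrated with a small simulation (illustrative only, not a model from this chapter):

```python
# Small simulation: averaging a few opinions helps, with diminishing returns
# as the shared error (bias / correlation) among judges grows. Opinions share
# a common error component plus independent noise; the truth is 0.
import random
from statistics import mean

def rmse_of_average(k, shared_sd, indiv_sd, questions=20000, seed=0):
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(questions):
        shared = rng.gauss(0, shared_sd)              # error common to all judges
        opinions = [shared + rng.gauss(0, indiv_sd) for _ in range(k)]
        sq_errors.append(mean(opinions) ** 2)
    return mean(sq_errors) ** 0.5

for k in (1, 2, 4, 8):
    low_corr = rmse_of_average(k, shared_sd=0.0, indiv_sd=10.0)
    high_corr = rmse_of_average(k, shared_sd=7.0, indiv_sd=10.0)
    print(f"k = {k}: RMSE independent = {low_corr:5.2f}, RMSE correlated = {high_corr:5.2f}")
```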
Affiliation(s)
- Ilan Yaniv
- Hebrew University of Jerusalem, Jerusalem, Israel
17
Granovskiy B, Gold JM, Sumpter DJT, Goldstone RL. Integration of Social Information by Human Groups. Top Cogn Sci 2015; 7:469-93. [PMID: 26189568] [DOI: 10.1111/tops.12150]
Abstract
We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy, but with information about the decisions made by peers in their group. The "wisdom of crowds" hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0% and 100% (e.g., "What percentage of Americans are left-handed?"). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move toward the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased.
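A sketch of the two-stage adjustment process postulated by the model, with illustrative parameter values rather than the paper's estimates:

```python
# Sketch of the two-stage adjustment process: a person first decides whether
# to take peer guesses into account, and if so moves toward the mean peer
# guess by an amount proportional to the distance to it. Parameters are
# illustrative, not the paper's estimates.
import random
from statistics import mean

def adjust(own_guess, peer_guesses, p_use_social=0.6, move_fraction=0.4, rng=random):
    # Stage 1: decide whether to heed the peer guesses at all.
    if rng.random() > p_use_social:
        return own_guess
    # Stage 2: move toward the mean peer guess, proportionally to the distance.
    group_mean = mean(peer_guesses)
    return own_guess + move_fraction * (group_mean - own_guess)

random.seed(3)
# "What percentage of Americans are left-handed?" with one far-off outlier.
guesses = [12.0, 8.0, 15.0, 75.0]
updated = [adjust(guesses[i], guesses[:i] + guesses[i + 1:]) for i in range(len(guesses))]
print([round(g, 1) for g in updated])
```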
Affiliation(s)
- Boris Granovskiy
- Department of Mathematics, Uppsala University, Institute for Futures Studies, Stockholm
- Jason M Gold
- Department of Psychological and Brain Sciences, Indiana University
- David J T Sumpter
- Department of Mathematics, Uppsala University, Institute for Futures Studies, Stockholm
18
The effectiveness of imperfect weighting in advice taking. Judgment and Decision Making 2015. [DOI: 10.1017/s1930297500004666]
Abstract
We investigate decision-making in the Judge-Advisor System, where one person, the “judge”, wants to estimate the number of a certain entity and is given advice by another person. The question is how to combine the judge’s initial estimate and that of the advisor in order to get the optimal expected outcome. A previous approach compared two frequently applied strategies: taking the average or choosing the better estimate. In most situations, averaging produced the better estimates. However, this approach neglected a third strategy that judges frequently use, namely a weighted mean of the judge’s initial estimate and the advice. We compare the performance of averaging and choosing to weighting in a theoretical analysis. If the judge can, without error, detect ability differences between judge and advisor, a straightforward calculation shows that weighting outperforms both of these strategies. More interestingly, after introducing errors in the perception of the ability differences, we show that such imperfect weighting may or may not be the optimal strategy. The relative performance of imperfect weighting compared to averaging or choosing depends on the size of the actual ability differences as well as the magnitude of the error. However, for a sizeable range of ability differences and errors, weighting is preferable to averaging and even more so to choosing. Our analysis expands previous research by showing that weighting, even when imperfect, is an appropriate advice-taking strategy, and by identifying the circumstances under which judges benefit most from applying it.
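One way to operationalise the three strategies in a simulation (the set-up and numbers are illustrative, not the paper's analysis): perceived ability is the true error SD distorted by noise, loosely mirroring 'imperfect weighting'.

```python
# Simulation sketch comparing three Judge-Advisor strategies: averaging,
# choosing the (perceived) better estimate, and weighting by perceived ability.
# Illustrative set-up only.
import random
from statistics import mean

def simulate(judge_sd=10.0, advisor_sd=5.0, perception_noise=0.5, trials=20000, seed=42):
    rng = random.Random(seed)
    errors = {"average": [], "choose": [], "weight": []}
    for _ in range(trials):
        judge, advisor = rng.gauss(0, judge_sd), rng.gauss(0, advisor_sd)  # truth = 0
        # Noisy perception of each party's error SD.
        pj = max(judge_sd * (1 + rng.gauss(0, perception_noise)), 0.1)
        pa = max(advisor_sd * (1 + rng.gauss(0, perception_noise)), 0.1)
        errors["average"].append(((judge + advisor) / 2) ** 2)
        errors["choose"].append((advisor if pa < pj else judge) ** 2)
        w_judge = pa ** 2 / (pj ** 2 + pa ** 2)          # inverse-variance style weight
        errors["weight"].append((w_judge * judge + (1 - w_judge) * advisor) ** 2)
    return {k: round(mean(v) ** 0.5, 2) for k, v in errors.items()}   # RMSE per strategy

print(simulate())
```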
19
Abstract
Background: Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research on this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science.
Methodology/principal findings: We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of social judgment theory and allows us to study various phenomena such as cognitive dissonance and the boomerang effect. The model also displays the order-of-presentation effect, in which, when consecutively exposed to two opinions, preference is given to the last opinion (recency) or the first opinion (primacy), and relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties.
Conclusions: The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect and cognitive dissonance. We point out several limitations of the model that should motivate its future development.
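A generic revision rule with the qualitative property described here, i.e. no reaction to persuasion that coincides with or lies far from the current opinion (an illustrative rule, not the authors' exact model):

```python
# Illustrative non-Bayesian revision rule: the agent moves toward a persuasive
# opinion only when it is moderately far from the current opinion, and ignores
# persuasion that coincides with, or lies beyond a latitude around, that opinion.
def revise(current, persuasion, gain=0.8, latitude=0.4):
    d = abs(persuasion - current)
    if d == 0 or d > latitude:        # no reaction to identical or too-distant opinions
        return current
    # Response peaks for intermediate discrepancies and vanishes at the latitude edge.
    responsiveness = 4 * (d / latitude) * (1 - d / latitude)
    return current + gain * responsiveness * (persuasion - current)

for p in (0.30, 0.45, 0.55, 0.90):
    print(f"persuasion {p:.2f} -> revised opinion {revise(0.30, p):.3f}")
```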
Affiliation(s)
- Aram Galstyan
- USC Information Sciences Institute, Marina del Rey, California, United States of America
20
Dowd KW, Petrocelli JV, Wood MT. Integrating information from multiple sources: A metacognitive account of self-generated and externally provided anchors. Thinking & Reasoning 2013. [DOI: 10.1080/13546783.2013.811442]
21
Sah S, Moore DA, MacCoun RJ. Cheap talk and credibility: The consequences of confidence and accuracy on advisor credibility and persuasiveness. Organizational Behavior and Human Decision Processes 2013. [DOI: 10.1016/j.obhdp.2013.02.001]
22
McBride MF, Fidler F, Burgman MA. Evaluating the accuracy and calibration of expert predictions under uncertainty: predicting the outcomes of ecological research. Divers Distrib 2012. [DOI: 10.1111/j.1472-4642.2012.00884.x]
23
Mercier H, Kawasaki Y, Yama H, Adachi K, Van der Henst JB. Is the Use of Averaging in Advice Taking Modulated by Culture? Journal of Cognition and Culture 2012. [DOI: 10.1163/156853712x633893]
24
Önkal D, Goodwin P, Thomson M, Gönül S, Pollock A. The relative influence of advice from human experts and statistical methods on forecast adjustments. Journal of Behavioral Decision Making 2009. [DOI: 10.1002/bdm.637]
25
Bonaccio S, Dalal RS. Evaluating advisors: A policy-capturing study under conditions of complete and missing information. Journal of Behavioral Decision Making 2009. [DOI: 10.1002/bdm.649]
26
Yaniv I, Milyavsky M. Using advice from multiple sources to revise and improve judgments. Organizational Behavior and Human Decision Processes 2007. [DOI: 10.1016/j.obhdp.2006.05.006]
27
Bonaccio S, Dalal RS. Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes 2006. [DOI: 10.1016/j.obhdp.2006.07.001]
28
Budescu DV, Yu HT. To Bayes or Not to Bayes? A Comparison of Two Classes of Models of Information Aggregation. Decision Analysis 2006. [DOI: 10.1287/deca.1060.0074]
29
Gino F, Moore DA. Effects of task difficulty on use of advice. Journal of Behavioral Decision Making 2006. [DOI: 10.1002/bdm.539]
30
Zhang J, Hsee CK, Xiao Z. The majority rule in individual decision making. Organizational Behavior and Human Decision Processes 2006. [DOI: 10.1016/j.obhdp.2005.06.004]
31
Humphrey SE, Hollenbeck JR, Meyer CJ, Ilgen DR. Hierarchical team decision making. Research in Personnel and Human Resources Management 2004. [DOI: 10.1016/s0742-7301(02)21004-x]
32
Yaniv I. Receiving other people's advice: Influence and benefit. Organizational Behavior and Human Decision Processes 2004. [DOI: 10.1016/j.obhdp.2003.08.002]
33
Harries C, Yaniv I, Harvey N. Combining advice: the weight of a dissenting opinion in the consensus. Journal of Behavioral Decision Making 2004. [DOI: 10.1002/bdm.474]
34
Budescu DV, Rantilla AK, Yu HT, Karelitz TM. The effects of asymmetry among advisors on the aggregation of their opinions. Organizational Behavior and Human Decision Processes 2003. [DOI: 10.1016/s0749-5978(02)00516-2]
35
Yaniv I, Kleinberger E. Advice Taking in Decision Making: Egocentric Discounting and Reputation Formation. Organizational Behavior and Human Decision Processes 2000; 83:260-281. [PMID: 11056071] [DOI: 10.1006/obhd.2000.2909]
Abstract
Our framework for understanding advice-taking in decision making rests on two theoretical concepts that motivate the studies and serve to explain the findings. The first is egocentric discounting of others' opinions and the second is reputation formation for advisors. Advice discounting is attributed to differential information, namely, the notion that decision makers have privileged access to their internal reasons for holding their own opinion, but not to the advisors' internal reasons. Reputation formation is related to the negativity effect in impression formation and to the trust asymmetry principle. In three studies we measured decision makers' weighting policy for advice and, in a fourth study, their willingness to pay for it. Briefly, we found that advice is discounted relative to one's own opinion, while advisors' reputations are rapidly formed and asymmetrically revised. The asymmetry implies that it may be easier for advisors to lose a good reputation than to gain one. The cognitive and social origins of these phenomena are considered. Copyright 2000 Academic Press.
Affiliation(s)
- I Yaniv
- Hebrew University of Jerusalem, Israel
36
Abstract
We investigate the case of a single decision maker (DM) who obtains probabilistic forecasts regarding the occurrence of a unique target event from J distinct, symmetric, and equally diagnostic expert advisors (judges). The paper begins with a mathematical model of DM's aggregation process of expert opinions, in which confidence in the final aggregate is shown to be inversely related to its perceived variance. As such, confidence is expected to vary as a function of factors such as the number of experts, the total number of cues, the fraction of cues available to each expert, the level of inter-expert overlap in information, and the range of experts' opinions. In the second part of the paper, we present results from two experiments that support the main (ordinal) predictions of the model.
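A back-of-the-envelope version of the variance driver described here, assuming J equally diagnostic forecasts with common variance sigma^2 and pairwise correlation rho standing in for inter-expert overlap:

```python
# Perceived variance of the average of J equally diagnostic forecasts with
# common variance sigma^2 and pairwise correlation rho:
#     Var(mean) = sigma^2 * ((1 - rho) / J + rho)
# Confidence is taken to fall as this variance rises. Illustrative only.
def variance_of_average(j_experts, sigma2=1.0, rho=0.3):
    return sigma2 * ((1 - rho) / j_experts + rho)

for rho in (0.0, 0.3, 0.7):            # more overlap among experts -> higher rho
    for j in (1, 3, 9):
        v = variance_of_average(j, rho=rho)
        print(f"rho = {rho:.1f}, J = {j}: perceived variance = {v:.3f}")
```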
Affiliation(s)
- D V Budescu
- Department of Psychology, University of Illinois at Urbana-Champaign (UIUC) 61820, USA.