1. Jaquiery M, Yeung N. Preferences for advisor agreement and accuracy. PLoS One 2024; 19:e0311211. PMID: 39331636; PMCID: PMC11432857; DOI: 10.1371/journal.pone.0311211.
Abstract
Previous research has shown that people are more influenced by advisors who are objectively more accurate, but also by advisors who tend to agree with their own initial opinions. The present experiments extend these ideas to consider people's choices of who they receive advice from: the process of source selection. Across a series of nine experiments, participants were first exposed to advisors who differed in objective accuracy, the likelihood of agreeing with the participants' judgments, or both, and then were given choice over who would advise them across a series of decisions. Participants saw these advisors in the context of perceptual decision and general knowledge tasks, sometimes with feedback provided and sometimes without. We found evidence that people can discern accurate from inaccurate advice even in the absence of feedback, but that without feedback they are biased to select advisors who tend to agree with them. When choosing between advisors who are accurate vs. likely to agree with them, participants overwhelmingly choose accurate advisors when feedback is available, but show wide individual differences in preference when feedback is absent. These findings extend previous studies of advice influence to characterise patterns of advisor choice, with implications for how people select information sources and learn accordingly.
Affiliation(s)
- Matt Jaquiery
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Nick Yeung
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
2. Leon T, Weidemann G, Kneebone II, Bailey PE. Cognitive and Emotional Factors Influencing the Incorporation of Advice Into Decision Making Across the Adult Lifespan. J Gerontol B Psychol Sci Soc Sci 2024; 79:gbae080. PMID: 38738919; PMCID: PMC11212316; DOI: 10.1093/geronb/gbae080.
Abstract
OBJECTIVES The present study sought to investigate the influence of advice on decision making in older age, as well as the potential influence of depressive symptoms and age-related differences in cognitively demanding emotion regulation on advice-taking. METHOD A nonclinical sample (N = 156; 50% female; 47 young: M age = 29.87, standard deviation [SD] = 5.58; 54 middle-aged: M age = 50.91, SD = 7.13; 55 older: M age = 72.51, SD = 5.33) completed a judge-advisor task to measure degree of advice-taking, as well as measures of fluid intelligence, depressive symptoms, confidence, perceived advice accuracy, and emotion regulation. RESULTS Age did not influence degree of advice-taking. Greater depressive symptoms were associated with greater reliance on advice, but only among individuals who identified as emotion regulators. Interestingly, older age was associated with perceiving advice to be less accurate. DISCUSSION The study contributes to the sparse literature on advice-taking in older age. Cognitive and emotional factors influence the degree to which advice is incorporated into decision making in consistent ways across the adult lifespan. A key difference is that older adults take as much advice as younger adults despite perceiving the advice to be less accurate.
Affiliation(s)
- Tarren Leon
- Graduate School of Health, University of Technology Sydney, Sydney, New South Wales, Australia
- Gabrielle Weidemann
- School of Psychology, Western Sydney University, Sydney, New South Wales, Australia
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, New South Wales, Australia
- Ian I Kneebone
- Graduate School of Health, University of Technology Sydney, Sydney, New South Wales, Australia
- Phoebe E Bailey
- Graduate School of Health, University of Technology Sydney, Sydney, New South Wales, Australia
3. Bruun A, White N, Oostendorp L, Vickerstaff V, Harris AJL, Tomlinson C, Bloch S, Stone P. An online randomised controlled trial of prognosticating imminent death in advanced cancer patients: Clinicians give greater weight to advice from a prognostic algorithm than from another clinician with a different profession. Cancer Med 2022; 12:7519-7528. PMID: 36444695; PMCID: PMC10067032; DOI: 10.1002/cam4.5485.
Abstract
BACKGROUND A second opinion or a prognostic algorithm may increase prognostic accuracy. This study assessed the extent to which clinicians integrate advice perceived to be coming from another clinician or from a prognostic algorithm into their prognostic estimates, and how participant characteristics and the nature of the advice received affect this. METHODS An online double-blind randomised controlled trial was conducted. Palliative doctors, nurses and other types of healthcare professionals were randomised into study arms differing by the perceived source of advice (algorithm or another clinician). In fact, the advice was the same in both arms (emanating from the PiPS-B14 prognostic model). Each participant reviewed five patient summaries. For each summary, participants: (1) provided an initial probability estimate of two-week survival (0% = certain death; 100% = certain survival); (2) received advice (another estimate); (3) provided a final estimate. Weight of Advice (WOA) was calculated for each summary (0 = 100% advice discounting; 1 = 0% discounting) and multilevel linear regression analyses were conducted. CLINICAL TRIAL REGISTRATION NUMBER NCT04568629. RESULTS A total of 283 clinicians were included in the analysis. Clinicians integrated advice from the algorithm more than advice from another clinician (WOA difference = -0.12 [95% CI -0.18, -0.07], p < 0.001). There was no interaction between study arm and participant profession, years of palliative care experience, or overall experience. Advice of intermediate strength (75%) was given a lower WOA (0.31) than advice at either the 50% level (WOA 0.40) or the 90% level (WOA 0.43). The overall interaction between strength of advice and study arm on WOA was significant (p < 0.001). CONCLUSION Clinicians adjusted their prognostic estimates more when advice was perceived to come from a prognostic algorithm than from another clinician. Research is needed to understand how clinicians make prognostic decisions and how algorithms are used in clinical practice.
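The Weight of Advice measure described here is consistent with the standard judge-advisor system formulation: the fraction of the distance from the initial estimate to the advice that the final estimate covers. A minimal sketch of that conventional computation follows; the function name, clipping rule, and example values are illustrative assumptions, not details taken from the trial itself.

```python
def weight_of_advice(initial, advice, final):
    """Standard judge-advisor system Weight of Advice (WOA).

    WOA = (final - initial) / (advice - initial), clipped to [0, 1]:
    0 means the advice was fully discounted, 1 means it was fully adopted.
    """
    if advice == initial:
        return None  # advice identical to the initial estimate; WOA is undefined
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))

# Example: initial estimate of 60% survival, advice of 30%, final estimate of 45%.
# The clinician moved halfway toward the advice, so WOA = 0.5.
print(weight_of_advice(60, 30, 45))  # 0.5
```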
Affiliation(s)
- Andrea Bruun
- Division of Psychiatry, Marie Curie Palliative Care Research Department, University College London, London, United Kingdom
- Nicola White
- Division of Psychiatry, Marie Curie Palliative Care Research Department, University College London, London, United Kingdom
- Linda Oostendorp
- Division of Psychiatry, Marie Curie Palliative Care Research Department, University College London, London, United Kingdom
- Victoria Vickerstaff
- Division of Psychiatry, Marie Curie Palliative Care Research Department, University College London, London, United Kingdom
- The Research Department of Primary Care and Population Health, Institute of Epidemiology and Health Care, University College London, London, United Kingdom
- Adam J L Harris
- Division of Psychology and Language Sciences, Department of Experimental Psychology, University College London, London, United Kingdom
- Christopher Tomlinson
- Department of Metabolism, Digestion and Reproduction, Faculty of Medicine, Imperial College London, London, United Kingdom
- Steven Bloch
- Division of Psychology and Language Sciences, Department of Language and Cognition, University College London, London, United Kingdom
- Patrick Stone
- Division of Psychiatry, Marie Curie Palliative Care Research Department, University College London, London, United Kingdom
4. Bailey PE, Leon T, Ebner NC, Moustafa AA, Weidemann G. A meta-analysis of the weight of advice in decision-making. Current Psychology 2022. DOI: 10.1007/s12144-022-03573-2.
Abstract
The degree to which people take advice, and the factors that influence advice-taking, are of broad interest to laypersons, professionals, and policy-makers. This meta-analysis on 346 effect sizes from 129 independent datasets (N = 17,296) assessed the weight of advice in the judge-advisor system paradigm, as well as the influence of sample and task characteristics. Information about the advisor(s) that is suggestive of advice quality was the only unique predictor of the overall pooled weight of advice. Individuals adjusted estimates by 32%, 37%, and 48% in response to advisors described in ways that suggest low, neutral, or high quality advice, respectively. This indicates that the benefits of compromise and averaging may be lost if accurate advice is perceived to be low quality, or too much weight is given to inaccurate advice that is perceived to be high quality. When examining the three levels of perceived quality separately, advice-taking was greater for subjective and uncertain estimates, relative to objective estimates, when information about the advisor was neutral in terms of advice quality. Sample characteristics had no effect on advice-taking, thus providing no evidence that age, gender, or individualism influence the weight of advice. The findings contribute to current theoretical debates and provide direction for future research.
5. Himmelstein M, Budescu DV. Preference for human or algorithmic forecasting advice does not predict if and how it is used. Journal of Behavioral Decision Making 2022. DOI: 10.1002/bdm.2285.
6. Morin O, Jacquet PO, Vaesen K, Acerbi A. Social information use and social information waste. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200052. PMID: 33993762; PMCID: PMC8126467; DOI: 10.1098/rstb.2020.0052.
Abstract
Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature use social information quite inefficiently: they do not take it sufficiently into account. A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste, and four studies showing social information being over-used. These studies cover 'egocentric discounting' phenomena as studied by social psychology, but also include experimental social learning studies. Social information waste means that human adults fail to give social information its optimal weight. Both proximal explanations and accounts derived from evolutionary theory leave crucial aspects of the phenomenon unaccounted for: egocentric discounting is a pervasive effect that no single unifying explanation fully captures. Cultural evolutionary theory's insistence on the power and benefits of social influence is to be balanced against this phenomenon. This article is part of the theme issue 'Foundations of cultural evolution'.
Affiliation(s)
- Olivier Morin
- Institut Jean Nicod, DEC, ENS, EHESS, CNRS, PSL University, UMR 8129, Paris, France
- Minds and Traditions Research Group, Max Planck Institute for the Science of Human History, Jena, Germany
- Pierre Olivier Jacquet
- Laboratoire de Neurosciences Cognitives et Computationnelles (LNC2), Département d'Etudes Cognitives, INSERM, Ecole Normale Supérieure, PSL Research University, Paris, France
- Krist Vaesen
- School of Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
- Alberto Acerbi
- Centre for Culture and Evolution, Brunel University London, London, UK
7.
Abstract
Crowdsourcing human forecasts and machine learning models each show promise in predicting future geopolitical outcomes. Crowdsourcing increases accuracy by pooling knowledge, which mitigates individual errors. On the other hand, advances in machine learning have led to machine models that increase accuracy due to their ability to parameterize and adapt to changing environments. To capitalize on the unique advantages of each method, recent efforts have shown improvements by "hybridizing" forecasts, pairing human forecasters with machine models. This study analyzes the effectiveness of such a hybrid system. In a perfect world, the independent reasoning of the forecasters and the analytic capabilities of the machine models should complement each other to arrive at an ultimately more accurate forecast. However, well-documented biases describe how humans often mistrust and under-utilize such models in their forecasts. In this work, we present a model that can be used to estimate the trust that humans assign to a machine. We use forecasts made in the absence of machine models as prior beliefs to quantify the weights placed on the models. Our model can be used to uncover other aspects of forecasters' decision-making processes. We find that forecasters rarely trust the model, in a pattern suggesting that they treat machine models similarly to expert advisors; only the best forecasters trust the models when the models can be expected to perform well. We also find that forecasters tend to choose models that conform to their prior beliefs rather than anchoring on the model forecast. Our results suggest that machine models can improve the judgment of a human pool, but they highlight the importance of accounting for trust and cognitive biases involved in the human judgment process.
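The abstract describes estimating the trust (weight) forecasters place on a machine model by treating their model-free forecasts as prior beliefs. The authors' actual model is not spelled out in the abstract, so the sketch below is only a rough illustration of the general idea under a simple linear-blending assumption; the function name, least-squares fitting choice, and example numbers are assumptions for illustration, not the authors' method.

```python
import numpy as np

def estimate_trust(prior, model, final):
    """Estimate one trust weight w per forecaster across many questions,
    assuming final ~= (1 - w) * prior + w * model and fitting w by least squares.

    prior, model, final: arrays of probability forecasts, one entry per question.
    """
    prior, model, final = map(np.asarray, (prior, model, final))
    shift = model - prior    # how far the model pulls away from the prior belief
    update = final - prior   # how far the forecaster actually moved
    if np.allclose(shift, 0):
        return None          # the model never disagreed, so w is unidentifiable
    w = np.dot(shift, update) / np.dot(shift, shift)
    return float(np.clip(w, 0.0, 1.0))

# Example: across three questions the forecaster moves about a third of the way
# toward the model forecast, so the estimated trust weight is roughly 0.33.
print(estimate_trust(prior=[0.2, 0.5, 0.8],
                     model=[0.8, 0.2, 0.5],
                     final=[0.4, 0.4, 0.7]))
```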