1
Abstract
In this article, I critically examine a number of widely held beliefs about the nature of replication and its place in science, with particular reference to psychology. In doing so, I present a number of underappreciated understandings of the nature of science more generally. I contend that some contributors to the replication debates overstate the importance of replication in science and mischaracterize the relationship between direct and conceptual replication. I also claim that there has been a failure to appreciate sufficiently the variety of legitimate replication practices that scientists engage in. In this regard, I highlight the tendency to pay insufficient attention to methodological triangulation as an important strategy for justifying empirical claims. I argue, further, that the replication debates tend to overstate the closeness of the relationship between replication and theory construction. Some features of this relationship are spelt out with reference to the hypothetico-deductive and the abductive accounts of scientific method. Additionally, an evaluation of the status of replication in different characterizations of scientific progress is undertaken. I maintain that viewing replication as just one element of the wide array of scientific endeavors leads to the conclusion that it is not as prominent in science as is often claimed.
Affiliation(s)
- Brian D. Haig
- School of Psychology, Speech & Hearing, University of Canterbury, Christchurch, New Zealand
2
Stanton JM. Evaluating Equivalence and Confirming the Null in the Organizational Sciences. Organizational Research Methods 2020. [DOI: 10.1177/1094428120921934] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
Abstract
Testing and rejecting the null hypothesis is a routine part of quantitative research, but relatively few organizational researchers prepare for confirming the null or, similarly, testing a hypothesis of equivalence (e.g., that two group means are practically identical). Both theory and practice could benefit from greater attention to this capability. Planning ahead for equivalence testing also provides helpful input on assuring sufficient statistical power in a study. This article provides background on these ideas plus guidance on the use of two frequentist and two Bayesian techniques for testing a hypothesis of no nontrivial effect. The guidance highlights some faulty strategies and how to avoid them. An organizationally relevant example illustrates how to put these techniques into practice. A simulation compares the four techniques to support recommendations of when and how to use each one. A nine-step process table describes separate analytical tracks for frequentist and Bayesian equivalence techniques.
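The frequentist equivalence techniques Stanton discusses are variants of the two one-sided tests (TOST) idea: a difference is declared practically zero when it is significantly above the lower equivalence bound and significantly below the upper one. A minimal sketch (not Stanton's own code; the pooled degrees-of-freedom approximation and the SciPy usage are my assumptions):

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high, alpha=0.05):
    """Two one-sided tests (TOST): declare two group means equivalent
    when their difference is significantly above `low` AND
    significantly below `high` (the chosen equivalence bounds)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df = len(x) + len(y) - 2  # simple pooled approximation to the df
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    p = max(p_lower, p_upper)  # TOST p-value: both one-sided tests must reject
    return p, bool(p < alpha)
```

With bounds of, say, ±0.5 units, two samples whose means differ by far less than 0.5 (relative to their standard errors) are declared equivalent; this is the planning-ahead step Stanton ties to power analysis, since the bounds must be fixed before the data are seen.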
3
Hassan NR, Mathiassen L, Lowry PB. The process of information systems theorizing as a discursive practice. Journal of Information Technology 2019. [DOI: 10.1177/0268396219832004] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6]
Abstract
Although there has been a growing understanding of theory in the information systems field in recent years, the process of theorizing is rarely addressed, with contributions originating from other disciplines and little effort made to synthesize them coherently. Moreover, the field’s view of theorizing has traditionally focused on the context of justification, with an emphasis on collection and analysis of data in response to a research question and theory often added as an afterthought. To fill this void, we foreground the context of discovery, which emphasizes the creative and often serendipitous articulation of theory, by treating this important stage of theorizing as a reflective and highly iterative practice. Specifically, we suggest that information systems researchers engage in foundational theorizing practices to form the discourse, problematize the phenomenon of interest, and leverage paradigms, and deploy generative theorizing practices through analogies, metaphors, myths and models to develop the information systems discourse. To illustrate the detailed workings of these discursive practices, we draw on key examples from information systems theorizing.
4
Is the Public Sector a Fairer Employer? Ethnic Employment Discrimination in the Public and Private Sectors. Academy of Management Discoveries 2018. [DOI: 10.5465/amd.2016.0029] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7]
5
Badenes-Ribera L, Frias-Navarro D, Iotti NO, Bonilla-Campos A, Longobardi C. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists. Front Psychol 2018; 9:996. [PMID: 29988476 PMCID: PMC6024681 DOI: 10.3389/fpsyg.2018.00996] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Received: 01/10/2018] [Accepted: 05/28/2018]
Abstract
Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge of effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to misinterpretation of results. It also found that, although the use of ES is becoming generalized, the same is not true for CIs. Finally, academics with greater knowledge of ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze whether these results extend to a different geographical area through a replication study. Methods: We created an online survey that included the same items as the original research and asked academic psychologists to indicate their level of knowledge of ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequently mentioned ES statistics were Cohen's d and R²/η², which can be distorted by outliers, non-normality, or violations of statistical assumptions. In addition, academics showed poor knowledge of meta-analytic displays (e.g., forest plots and funnel plots) and quality checklists for studies. Finally, academics with higher-level knowledge of ES statistics seem to have a profile closer to good statistical practices.
Conclusions: Changing statistical practice is not easy. This change requires statistical training programs for academics, both graduate and undergraduate.
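The under-reported practice this survey highlights, pairing an effect size with its confidence interval, is straightforward to implement. A minimal sketch for Cohen's d with a large-sample normal-approximation interval (the standard-error formula is a common approximation, assumed here, not taken from the paper):

```python
import numpy as np
from scipy import stats

def cohens_d_ci(x, y, conf=0.95):
    """Cohen's d for two independent groups with a large-sample
    normal-approximation confidence interval."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
    d = (x.mean() - y.mean()) / sp
    # Approximate standard error of d (large-sample formula)
    se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
    z = stats.norm.ppf(0.5 + conf / 2)
    return float(d), (float(d - z * se), float(d + z * se))
```

Reporting the interval alongside d makes the precision of the estimate visible, which is exactly the information a bare point estimate (or a bare p-value) hides.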
Affiliation(s)
- Laura Badenes-Ribera
- Departament de Metodologia de les Ciències del Comportament, Universitat de València, Valencia, Spain
- Dolores Frias-Navarro
- Departament de Metodologia de les Ciències del Comportament, Universitat de València, Valencia, Spain
- Nathalie O Iotti
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
- Amparo Bonilla-Campos
- Departament de Metodologia de les Ciències del Comportament, Universitat de València, Valencia, Spain
- Claudio Longobardi
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
6
Banks GC, Woznyj HM, Kepes S, Batchelor JH, McDaniel MA. A meta-analytic review of tipping compensation practices: An agency theory perspective. Personnel Psychology 2018. [DOI: 10.1111/peps.12261] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3]
7
Kepes S, Thomas MA. Assessing the robustness of meta-analytic results in information systems: publication bias and outliers. Eur J Inform Syst 2018. [DOI: 10.1080/0960085x.2017.1390188] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7]
Affiliation(s)
- Sven Kepes
- School of Business, Virginia Commonwealth University, Richmond, VA, USA
- Manoj A. Thomas
- School of Business, Virginia Commonwealth University, Richmond, VA, USA
8
Benjamin AJ, Kepes S, Bushman BJ. Effects of Weapons on Aggressive Thoughts, Angry Feelings, Hostile Appraisals, and Aggressive Behavior: A Meta-Analytic Review of the Weapons Effect Literature. Personality and Social Psychology Review 2017; 22:347-377. [DOI: 10.1177/1088868317725419] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3]
Abstract
A landmark 1967 study showed that simply seeing a gun can increase aggression—called the “weapons effect.” Since 1967, many other studies have attempted to replicate and explain the weapons effect. This meta-analysis integrates the findings of weapons effect studies conducted from 1967 to 2017 and uses the General Aggression Model (GAM) to explain the weapons effect. It includes 151 effect-size estimates from 78 independent studies involving 7,668 participants. As predicted by the GAM, our naïve meta-analytic results indicate that the mere presence of weapons increased aggressive thoughts, hostile appraisals, and aggression, suggesting a cognitive route from weapons to aggression. Weapons did not significantly increase angry feelings. Yet, a comprehensive sensitivity analysis indicated that not all naïve mean estimates were robust to the presence of publication bias. In general, these results suggest that the published literature tends to overestimate the weapons effect for some outcomes and moderators.
Affiliation(s)
- Sven Kepes
- Virginia Commonwealth University, Richmond, USA
9
Validity Generalization as a Continuum. Industrial and Organizational Psychology: Perspectives on Science and Practice 2017. [DOI: 10.1017/iop.2017.46] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1]
Abstract
Tett, Hundley, and Christiansen (2017) make a compelling case against meta-analyses that focus on mean effect sizes (e.g., r_xy and ρ) while largely disregarding the precision of the estimate and true score variance. This is a reasonable point, but meta-analyses that myopically focus on mean effects at the expense of variance are not examples of validity generalization (VG)—they are examples of bad meta-analyses. VG and situational specificity (SS) fall along a continuum, and claims about generalization are confined to the research question and the type of generalization one is seeking (e.g., directional generalization, magnitude generalization). What Tett et al. (2017) successfully debunk is an extreme position along the generalization continuum significantly beyond the tenets of VG that few, if any, in the research community hold. The position they argue against is essentially a fixed-effects assumption, which runs counter to VG. Describing VG in this way is akin to describing SS as a position that completely ignores sampling error and treats every between-sample difference in effect size as true score variance. Both are strawmen that were knocked down decades ago (Schmidt et al., 1985). There is great value in debating whether a researcher should or can argue for generalization, but this debate must start with (a) an accurate portrayal of VG, (b) a discussion of different forms of generalization, and (c) the costs of trying to establish universal thresholds for VG.
10
Wenzel R, Van Quaquebeke N. The Double-Edged Sword of Big Data in Organizational and Management Research. Organizational Research Methods 2017. [DOI: 10.1177/1094428117718627] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1]
Abstract
While many disciplines embrace the possibilities that Big Data present for advancing scholarship and practice, organizational and management research has yet to realize Big Data’s potential. In an effort to chart this newfound territory, we briefly describe the principal drivers and key characteristics of Big Data. We then review a broad range of opportunities and risks that are related to the Big Data paradigm, the data itself, and the associated analytical methods. For each, we provide research ideas and recommendations on how to embrace the potentials or address the concerns. Our assessment shows that Big Data, as a paradigm, can be a double-edged sword, capable of significantly advancing our field but also causing backlash if not utilized properly. Our review seeks to inform individual research practices as well as a broader policy agenda in order to advance organizational and management research as a scientifically rigorous and professionally relevant field.
Affiliation(s)
- Ramon Wenzel
- Business School, University of Western Australia, Perth, WA, Australia
11
Amrhein V, Korner-Nievergelt F, Roth T. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ 2017; 5:e3544. [PMID: 28698825 PMCID: PMC5502092 DOI: 10.7717/peerj.3544] [Citation(s) in RCA: 144] [Impact Index Per Article: 20.6] [Received: 04/10/2017] [Accepted: 06/14/2017]
Abstract
The widespread use of 'statistical significance' as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into 'significant' and 'nonsignificant' contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Also significance (p ≤ 0.05) is hardly replicable: at a good statistical power of 80%, two studies will be 'conflicting', meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. 
Also larger p-values offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, falsely concluding that 'there is no effect'. Information on possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment.
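The authors' claim that two adequately powered studies of a true effect will "conflict" (one significant, one not) about a third of the time is easy to verify by simulation. A sketch of that calculation (the per-group sample size of 64 and the standardized effect of 0.5, which give roughly 80% power, are my choices for illustration, not the authors'):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def significant(n, delta):
    """Run one two-group study of a true standardized effect `delta`
    with n participants per group; return whether p < .05."""
    x = rng.normal(delta, 1.0, n)
    y = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(x, y).pvalue < 0.05

def conflict_rate(n=64, delta=0.5, reps=5000):
    """Fraction of study pairs in which exactly one of two identical
    studies reaches significance (n = 64 per group gives roughly
    80% power for delta = 0.5)."""
    return float(np.mean([significant(n, delta) != significant(n, delta)
                          for _ in range(reps)]))
```

With power of about 0.8, the expected conflict rate is 2 × 0.8 × 0.2 = 0.32, and `conflict_rate()` comes out near that value: roughly one pair in three disagrees even though a true effect exists in every study.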
Affiliation(s)
- Valentin Amrhein
- Zoological Institute, University of Basel, Basel, Switzerland
- Research Station Petite Camargue Alsacienne, Saint-Louis, France
- Swiss Ornithological Institute, Sempach, Switzerland
- Tobias Roth
- Zoological Institute, University of Basel, Basel, Switzerland
- Research Station Petite Camargue Alsacienne, Saint-Louis, France
12
Goering DD, Shimazu A, Zhou F, Wada T, Sakai R. Not if, but how they differ: A meta-analytic test of the nomological networks of burnout and engagement. Burnout Research 2017. [DOI: 10.1016/j.burn.2017.05.003] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6]
13
Realizing the full potential of psychometric meta-analysis for a cumulative science and practice of human resource management. Human Resource Management Review 2017. [DOI: 10.1016/j.hrmr.2016.09.011] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9]
14
Lander MW, Heugens PP. Better Together: Using Meta-Analysis to Explore Complementarities between Ecological and Institutional Theories of Organization. Organization Studies 2017. [DOI: 10.1177/0170840616677629] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9]
Abstract
While sharing intellectual ancestry, organizational ecology and institutionalism are rarely used conjointly to explain population dynamics. A rapprochement would nevertheless be fruitful, as the parsimonious models developed by ecologists are better able to explain organizational founding and failure when enriched with institutional variables. We present a meta-analysis of density dependence theory, which predicts a non-monotonic relationship between population density and organizational vital events. We show that ecology and institutionalism are ‘better together’ by extending this ecological framework in four institutionalism-inspired ways. First, we show that the effects of density on organizational vital rates are moderated by two conceptions of time: ecological ‘clocks’ and institutional ‘eras’. Second, we argue that the socio-political legitimacy of organizational forms, a concept with strong institutional roots, exacerbates density-related founding while attenuating failure. Third, we illustrate how the emergence of prototypical categories in organizational fields can increase the magnitude of density effects. Fourth, we highlight how these socio-political legitimacy and categorization effects are conditioned by ecological clock time. We close by proposing a concise agenda for future research, aimed at finding a better balance between the generality and explanatory power of our most trusted organizational theories.
15
Rao CR, Lovric MM. Testing Point Null Hypothesis of a Normal Mean and the Truth: 21st Century Perspective. Journal of Modern Applied Statistical Methods 2016. [DOI: 10.22237/jmasm/1478001660] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8]
16
Kepes S, McDaniel MA. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance. PLoS One 2015; 10:e0141468. [PMID: 26517553 PMCID: PMC4627756 DOI: 10.1371/journal.pone.0141468] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Received: 02/22/2015] [Accepted: 10/08/2015]
Abstract
Introduction: Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods: To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results: Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion: The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
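The suppression mechanism described here, small effects going unpublished and thereby inflating the naive meta-analytic mean, can be illustrated with a toy simulation (the true effect, sample sizes, and the 30% publication rate for nonsignificant studies are hypothetical parameters of my sketch, not estimates from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mean_published_effect(true_d=0.2, n=40, studies=2000, pub_rate_ns=0.3):
    """Mean Cohen's d in a simulated literature where nonsignificant
    studies reach print only `pub_rate_ns` of the time; the naive
    meta-analytic mean then overshoots the true effect."""
    published = []
    for _ in range(studies):
        x = rng.normal(true_d, 1.0, n)
        y = rng.normal(0.0, 1.0, n)
        sp = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
        d = (x.mean() - y.mean()) / sp  # study-level effect size
        sig = stats.ttest_ind(x, y).pvalue < 0.05
        if sig or rng.random() < pub_rate_ns:  # selective publication
            published.append(d)
    return float(np.mean(published))
```

Under these assumptions the published mean lands well above the true value of 0.2, which is the pattern the paper's publication bias analyses diagnose in the conscientiousness literature; sensitivity analyses exist precisely to detect this kind of inflation.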
Affiliation(s)
- Sven Kepes
- Department of Management, School of Business, Virginia Commonwealth University, Richmond, Virginia, United States of America
- Michael A. McDaniel
- Department of Management, School of Business, Virginia Commonwealth University, Richmond, Virginia, United States of America
17
Bosco FA, Aguinis H, Field JG, Pierce CA, Dalton DR. HARKing's Threat to Organizational Research: Evidence From Primary and Meta-Analytic Sources. Personnel Psychology 2015. [DOI: 10.1111/peps.12111] [Citation(s) in RCA: 85] [Impact Index Per Article: 9.4]
18
Perezgonzalez JD. Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing. Front Psychol 2015; 6:223. [PMID: 25784889 PMCID: PMC4347431 DOI: 10.3389/fpsyg.2015.00223] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Received: 01/21/2015] [Accepted: 02/13/2015]
Abstract
Despite frequent calls for the overhaul of null hypothesis significance testing (NHST), this controversial procedure remains ubiquitous in behavioral, social and biomedical teaching and research. Little change seems possible once the procedure becomes well ingrained in the minds and current practice of researchers; thus, the optimal opportunity for such change is at the time the procedure is taught, be this at undergraduate or at postgraduate levels. This paper presents a tutorial for the teaching of data testing procedures, often referred to as hypothesis testing theories. The first procedure introduced is Fisher's approach to data testing—tests of significance; the second is Neyman-Pearson's approach—tests of acceptance; the final procedure is the incongruent combination of the previous two theories into the current approach—NHST. For those researchers sticking with the latter, two compromise solutions on how to improve NHST conclude the tutorial.
19
Kepes S, McDaniel MA. How Trustworthy Is the Scientific Literature in Industrial and Organizational Psychology? Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. [DOI: 10.1111/iops.12045] [Citation(s) in RCA: 95] [Impact Index Per Article: 10.6]
Abstract
The trustworthiness of research findings has been questioned in many domains of science. This article calls for a review of the trustworthiness of the scientific literature in industrial–organizational (I–O) psychology and a reconsideration of common practices that may harm the credibility of our literature. We note that most hypotheses in I–O psychology journals are confirmed. Thus, we are either approaching omniscience or our journals are publishing an unrepresentative sample of completed research. We view the latter explanation as more likely. We review structural problems in the publication process and in the conduct of research that is likely to promote a distortion of scientific knowledge. We then offer recommendations to make the I–O literature more accurate and trustworthy.
20
Schmidt GB, Landers RN. Solving the Replication Problem in Psychology Requires Much More Than a Website. Industrial and Organizational Psychology: Perspectives on Science and Practice 2015. [DOI: 10.1111/iops.12056] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]
21
McCarter MW, Samek A, Sheremeta RM. Divided Loyalists or Conditional Cooperators? Creating Consensus About Cooperation in Multiple Simultaneous Social Dilemmas. Group & Organization Management 2014. [DOI: 10.1177/1059601114551606] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8]
Abstract
The current social dilemma literature lacks theoretical consensus regarding how individuals behave when facing multiple simultaneous social dilemmas. The divided-loyalty hypothesis, from organizational theory, predicts that cooperation will decline as individuals experience multiple social dilemmas with different compared to the same group members. The conditional-cooperation hypothesis, from behavioral economics, predicts that cooperation will increase as individuals experience multiple social dilemmas with different compared to the same group members. We employ a laboratory experiment to create consensus between these literatures and find support for the conditional-cooperation hypothesis. The positive effect of interacting with different group members comes from participants having an opportunity to shift their cooperative behavior from the less cooperative to the more cooperative group.
Affiliation(s)
- Matthew W. McCarter
- University of Texas, San Antonio, USA
- Economic Science Institute, Chapman University, Orange, CA, USA
- Roman M. Sheremeta
- Case Western Reserve University, Cleveland, OH, USA
- Economic Science Institute, Chapman University, Orange, CA, USA
22
Aguinis H, Vandenberg RJ. An Ounce of Prevention Is Worth a Pound of Cure: Improving Research Quality Before Data Collection. Annual Review of Organizational Psychology and Organizational Behavior 2014. [DOI: 10.1146/annurev-orgpsych-031413-091231] [Citation(s) in RCA: 172] [Impact Index Per Article: 17.2]
Abstract
We rely on classic as well as recently published sources to offer a review of theory, research design, and measurement issues that should be considered prior to conducting any empirical study. First, we examine theory-related issues that should be addressed before research design and measurement considerations. Specifically, we discuss how to make meaningful theoretical progress including the use of inductive and deductive approaches, address an important issue, and conduct research with a practical end in mind. Second, we offer recommendations regarding research design, including how to address the low statistical power challenge, design studies that strengthen inferences about causal relationships, and use control variables appropriately. Finally, we address measurement issues. Specifically, we discuss how to improve the link between underlying constructs and their observable indicators. Our review offers a checklist for use by researchers to improve research quality prior to data collection and by journal editors and reviewers to evaluate the quality of submitted manuscripts.
Affiliation(s)
- Herman Aguinis
- Department of Management and Entrepreneurship, Kelley School of Business, Indiana University, Bloomington, Indiana 47405
- Robert J. Vandenberg
- Department of Management, Terry College of Business, The University of Georgia, Athens, Georgia 30602
23
Doh JP, Lucea R. So Close Yet So Far: Integrating Global Strategy and Nonmarket Research. Global Strategy Journal 2013. [DOI: 10.1111/j.2042-5805.2013.01053.x] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3]
Affiliation(s)
- Jonathan P. Doh
- Villanova School of Business, Villanova University, Villanova, Pennsylvania, USA
- Rafael Lucea
- GW School of Business, The George Washington University, Washington, D.C., USA
24
Affiliation(s)
- John K. Kruschke
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Herman Aguinis
- Department of Management and Entrepreneurship, Kelley School of Business, Indiana University, Bloomington, IN, USA
- Harry Joo
- Department of Management and Entrepreneurship, Kelley School of Business, Indiana University, Bloomington, IN, USA
25
Kepes S, Banks GC, McDaniel M, Whetzel DL. Publication Bias in the Organizational Sciences. Organizational Research Methods 2012. [DOI: 10.1177/1094428112452760] [Citation(s) in RCA: 238] [Impact Index Per Article: 19.8]
Affiliation(s)
- Sven Kepes
- Virginia Commonwealth University, Richmond, VA, USA