1.
Akbaş KE, Hark BD. Evaluation of quantitative bias analysis in epidemiological research: A systematic review from 2010 to mid-2023. J Eval Clin Pract 2024; 30:1413-1421. PMID: 39031561. DOI: 10.1111/jep.14065.
Abstract
OBJECTIVE We aimed to demonstrate the use of quantitative bias analysis (QBA), which reveals the effects of systematic error, including confounding, misclassification and selection bias, on study results, in epidemiological studies published from 2010 to mid-2023. METHOD Articles identified through a keyword search of PubMed and Scopus were included in the study. Articles retrieved by this search were screened against the exclusion criteria, and those applying QBA were included in the detailed evaluation. RESULTS The application of QBA has gradually increased over the 13-year period. The simple approach was used in 9 articles (9.89%), the multidimensional approach in 10 (10.99%), the probabilistic approach in 60 (65.93%), and the method was not specified in 12 (13.19%). A misclassification bias model was used in 44 articles (48.35%), an uncontrolled confounder(s) bias model in 32 (35.16%), a selection bias model in 7 (7.69%), and more than one bias model in 8 (8.79%). Of the 49 articles (53.85%) specifying the source of the bias parameters, 19 (38.78%) used internal validation, 26 (53.06%) used external validation, and 4 (8.16%) used an educated guess, data constraints or hypothetical data. Among the 60 articles (65.93%) using the probabilistic approach, the most frequently selected distributions were beta (8 [13.33%]), normal (9 [15.00%]) and uniform (8 [13.33%]). CONCLUSION The application of QBA is rare in the literature but is increasing over time. Future researchers should include detailed analyses such as QBA, taking systematic errors into account, to obtain inferences with higher evidence value.
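The probabilistic approach tallied above can be illustrated with a short sketch: sensitivity and specificity of exposure classification are drawn from beta distributions (one of the distribution families most often chosen in the reviewed articles), each draw is used to back-correct a summary 2x2 table, and the spread of the corrected odds ratios summarises the systematic-error uncertainty. All counts and distribution parameters below are hypothetical, chosen only for illustration.

```python
import random
import statistics

def probabilistic_qba_or(a, b, c, d, se_params, sp_params, n_iter=5000, seed=1):
    """Probabilistic bias analysis for nondifferential exposure
    misclassification of a case-control 2x2 table.
    a, b = exposed/unexposed cases; c, d = exposed/unexposed controls;
    se_params, sp_params = (alpha, beta) of the sensitivity/specificity priors."""
    rng = random.Random(seed)
    n_cases, n_controls = a + b, c + d
    ors = []
    for _ in range(n_iter):
        se = rng.betavariate(*se_params)   # sensitivity draw
        sp = rng.betavariate(*sp_params)   # specificity draw
        denom = se + sp - 1
        if denom <= 0:
            continue  # draw implies worse-than-random classification
        # Back-correct each margin: expected counts of truly exposed
        a_t = (a - (1 - sp) * n_cases) / denom
        c_t = (c - (1 - sp) * n_controls) / denom
        b_t, d_t = n_cases - a_t, n_controls - c_t
        if min(a_t, b_t, c_t, d_t) <= 0:
            continue  # draw incompatible with the observed data
        ors.append((a_t * d_t) / (b_t * c_t))
    ors.sort()
    k = len(ors)
    return {"median": statistics.median(ors),
            "lo": ors[int(0.025 * k)], "hi": ors[int(0.975 * k)]}
```

Comparing the median and 2.5th/97.5th percentiles of the corrected odds ratios with the conventional estimate shows how much of an apparent effect misclassification alone could explain; conventional random error would still need to be layered on top.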
Affiliation(s)
- Kübra Elif Akbaş
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Fırat University, Elazig, Turkey
- Betül Dağoğlu Hark
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Fırat University, Elazig, Turkey
2.
Karipidis K, Baaken D, Loney T, Blettner M, Brzozek C, Elwood M, Narh C, Orsini N, Röösli M, Paulo MS, Lagorio S. The effect of exposure to radiofrequency fields on cancer risk in the general and working population: A systematic review of human observational studies - Part I: Most researched outcomes. Environment International 2024; 191:108983. PMID: 39241333. DOI: 10.1016/j.envint.2024.108983.
Abstract
BACKGROUND The objective of this review was to assess the quality and strength of the evidence provided by human observational studies for a causal association between exposure to radiofrequency electromagnetic fields (RF-EMF) and risk of the most investigated neoplastic diseases. METHODS Eligibility criteria: We included cohort and case-control studies of neoplasia risks in relation to three types of exposure to RF-EMF: near-field, head-localized, exposure from wireless phone use (SR-A); far-field, whole body, environmental exposure from fixed-site transmitters (SR-B); near/far-field occupational exposures from use of hand-held transceivers or RF-emitting equipment in the workplace (SR-C). While no restrictions on tumour type were applied, in the current paper we focus on incidence-based studies of selected "critical" neoplasms of the central nervous system (brain, meninges, pituitary gland, acoustic nerve) and salivary gland tumours (SR-A); brain tumours and leukaemias (SR-B, SR-C). We focussed on investigations of specific neoplasms in relation to specific exposure sources (i.e. E-O pairs), noting that a single article may address multiple E-O pairs. INFORMATION SOURCES Eligible studies were identified by literature searches through Medline, Embase, and EMF-Portal. Risk-of-bias (RoB) assessment: We used a tailored version of the Office of Health Assessment and Translation (OHAT) RoB tool to evaluate each study's internal validity. At the summary RoB step, studies were classified into three tiers according to their overall potential for bias (low, moderate and high). DATA SYNTHESIS We synthesized the study results using random effects restricted maximum likelihood (REML) models (overall and subgroup meta-analyses of dichotomous and categorical exposure variables), and weighted mixed effects models (dose-response meta-analyses of lifetime exposure intensity). 
Evidence assessment: Confidence in evidence was assessed using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach. RESULTS We included 63 aetiological articles, published between 1994 and 2022, with participants from 22 countries, reporting on 119 different E-O pairs. RF-EMF exposure from mobile phones (ever or regular use vs no or non-regular use) was not associated with an increased risk of glioma [meta-estimate of the relative risk (mRR) = 1.01, 95 % CI = 0.89-1.13], meningioma (mRR = 0.92, 95 % CI = 0.82-1.02), acoustic neuroma (mRR = 1.03, 95 % CI = 0.85-1.24), pituitary tumours (mRR = 0.81, 95 % CI = 0.61-1.06), salivary gland tumours (mRR = 0.91, 95 % CI = 0.78-1.06), or paediatric (children, adolescents and young adults) brain tumours (mRR = 1.06, 95 % CI = 0.74-1.51), with a variable degree of across-study heterogeneity (I2 = 0 %-62 %). There was no observable increase in mRRs for the most investigated neoplasms (glioma, meningioma, and acoustic neuroma) with increasing time since start (TSS) of mobile phone use, cumulative call time (CCT), or cumulative number of calls (CNC). Cordless phone use was not significantly associated with risks of glioma (mRR = 1.04, 95 % CI = 0.74-1.46; I2 = 74 %), meningioma (mRR = 0.91, 95 % CI = 0.70-1.18; I2 = 59 %), or acoustic neuroma (mRR = 1.16, 95 % CI = 0.83-1.61; I2 = 63 %). Exposure from fixed-site transmitters (broadcasting antennas or base stations) was not associated with childhood leukaemia or paediatric brain tumour risks, independently of the level of the modelled RF exposure. Glioma risk was not significantly increased following occupational RF exposure (ever vs never), and no differences were detected between increasing categories of modelled cumulative exposure levels.
DISCUSSION In the sensitivity analyses of glioma, meningioma, and acoustic neuroma risks in relation to mobile phone use (ever use, TSS, CCT, and CNC), the results were robust and not affected by changes in study aggregation. In leave-one-out meta-analyses of glioma risk in relation to mobile phone use, we identified one influential study. In subsequent meta-analyses performed after excluding this study, we observed a substantial reduction in the mRR and in the heterogeneity between studies, both for the contrast ever vs never (regular) use (mRR = 0.96, 95 % CI = 0.87-1.07, I2 = 47 %) and in the analysis by increasing categories of TSS ("<5 years": mRR = 0.97, 95 % CI = 0.83-1.14, I2 = 41 %; "5-9 years": mRR = 0.96, 95 % CI = 0.83-1.11, I2 = 34 %; "10+ years": mRR = 0.97, 95 % CI = 0.87-1.08, I2 = 10 %). There was limited variation across studies in RoB for the priority domains (selection/attrition, exposure and outcome information), with studies evenly classified as at low and moderate risk of bias (49 % tier-1 and 51 % tier-2) and no studies classified as at high risk of bias (tier-3). The impact of the biases on the study results (amount and direction) proved difficult to predict, and the RoB tool was inherently unable to account for the effect of competing biases. However, the sensitivity meta-analyses stratified on bias tier showed that the heterogeneity observed in our main meta-analyses across studies of glioma and acoustic neuroma in the upper TSS stratum (I2 = 77 % and 76 %) was explained by the summary RoB tier. In the tier-1 study subgroup, the mRRs (95 % CI; I2) in long-term (10+ years) users were 0.95 (0.85-1.05; 5.5 %) for glioma, and 1.00 (0.78-1.29; 35 %) for acoustic neuroma.
The time-trend simulation studies, evaluated as complementary evidence in line with a triangulation approach for external validity, were consistent in showing that the increased risks observed in some case-control studies were incompatible with the actual incidence rates of glioma/brain cancer observed in several countries and over long periods. Three of these simulation studies consistently reported that RR estimates > 1.5 with a 10+ years induction period were definitely implausible, and could be used to set a "credibility benchmark". In the sensitivity meta-analyses of glioma risk in the upper category of TSS excluding five studies reporting implausible effect sizes, we observed strong reductions in both the mRR [0.95 (95 % CI = 0.86-1.05)] and the degree of heterogeneity across studies (I2 = 3.6 %). CONCLUSIONS Consistent with the published protocol, our final conclusions were formulated separately for each exposure-outcome combination, and primarily based on the line of evidence with the highest confidence, taking into account the ranking of RF sources by exposure level as inferred from dosimetric studies, and the external coherence with findings from time-trend simulation studies (limited to glioma in relation to mobile phone use). For near-field RF-EMF exposure to the head from mobile phone use, there was moderate certainty evidence that it likely does not increase the risk of glioma, meningioma, acoustic neuroma, pituitary tumours, and salivary gland tumours in adults, or of paediatric brain tumours. For near-field RF-EMF exposure to the head from cordless phone use, there was low certainty evidence that it may not increase the risk of glioma, meningioma or acoustic neuroma.
For whole-body far-field RF-EMF exposure from fixed-site transmitters (broadcasting antennas or base stations), there was moderate certainty evidence that it likely does not increase childhood leukaemia risk and low certainty evidence that it may not increase the risk of paediatric brain tumours. There were no studies eligible for inclusion investigating RF-EMF exposure from fixed-site transmitters and critical tumours in adults. For occupational RF-EMF exposure, there was low certainty evidence that it may not increase the risk of brain cancer/glioma, but there were no included studies of leukaemias (the second critical outcome in SR-C). The evidence rating regarding paediatric brain tumours in relation to environmental RF exposure from fixed-site transmitters should be interpreted with caution, due to the small number of studies. Similar interpretative cautions apply to the evidence rating of the relation between glioma/brain cancer and occupational RF exposure, due to differences in exposure sources and metrics across the few included studies. OTHER This project was commissioned and partially funded by the World Health Organization (WHO). Co-financing was provided by the New Zealand Ministry of Health; the Istituto Superiore di Sanità in its capacity as a WHO Collaborating Centre for Radiation and Health; and ARPANSA as a WHO Collaborating Centre for Radiation Protection. REGISTRATION PROSPERO CRD42021236798. Published protocol: [(Lagorio et al., 2021) DOI https://doi.org/10.1016/j.envint.2021.106828].
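The pooled mRRs above come from random-effects REML models. As a simpler, self-contained illustration of the same idea, the sketch below pools log relative risks with the DerSimonian-Laird estimator of between-study variance, not the REML fitting used in the review, and the input study estimates are hypothetical.

```python
import math

def random_effects_pool(estimates):
    """DerSimonian-Laird random-effects meta-analysis.
    estimates: list of (rr, ci_lo, ci_hi) per study; 95 % CIs assumed."""
    y = [math.log(rr) for rr, lo, hi in estimates]
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in estimates]
    w = [1 / s ** 2 for s in se]                      # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in se]        # random-effects weights
    y_pool = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_pool = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I-squared, %
    return {"mRR": math.exp(y_pool),
            "lo": math.exp(y_pool - 1.96 * se_pool),
            "hi": math.exp(y_pool + 1.96 * se_pool),
            "I2": i2}
```

The I2 statistic returned here is the same across-study heterogeneity measure quoted throughout the abstract; when tau2 is estimated as zero, the pooled result collapses to the fixed-effect estimate.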
Affiliation(s)
- Ken Karipidis
- Australian Radiation Protection and Nuclear Safety Agency (ARPANSA), Yallambie, VIC, Australia.
- Dan Baaken
- Competence Center for Electromagnetic Fields, Federal Office for Radiation Protection (BfS), Cottbus, Germany
- Institute of Medical Biostatistics, Epidemiology and Informatics (IMBEI), University of Mainz, Germany
- Tom Loney
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Health, Dubai, United Arab Emirates
- Maria Blettner
- Institute of Medical Biostatistics, Epidemiology and Informatics (IMBEI), University of Mainz, Germany
- Chris Brzozek
- Australian Radiation Protection and Nuclear Safety Agency (ARPANSA), Yallambie, VIC, Australia
- Mark Elwood
- Epidemiology and Biostatistics, School of Population Health, University of Auckland, New Zealand
- Clement Narh
- Department of Epidemiology and Biostatistics, School of Public Health (Hohoe Campus), University of Health and Allied Sciences, PMB31 Ho, Ghana
- Nicola Orsini
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
- Martin Röösli
- Swiss Tropical and Public Health Institute, Basel, Switzerland
- University of Basel, Basel, Switzerland
- Marilia Silva Paulo
- Comprehensive Health Research Center, NOVA Medical School, Universidade NOVA de Lisboa, Portugal
- Susanna Lagorio
- Department of Oncology and Molecular Medicine, National Institute of Health (Istituto Superiore di Sanità), Rome, Italy
3.
Abstract
Observational research provides valuable opportunities to advance oral health science but is limited by vulnerabilities to systematic bias, including unmeasured confounding, errors in variable measurement, or bias in the creation of study populations and/or analytic samples. The potential influence of systematic biases on observed results is often only briefly mentioned among the discussion of limitations of a given study, despite existing methods that support detailed assessments of their potential effects. Quantitative bias analysis is a set of methodological techniques that, when applied to observational data, can provide important context to aid in the interpretation and integration of observational research findings into the broader body of oral health research. Specifically, these methods were developed to provide quantitative estimates of the potential magnitude and direction of the influence of systematic biases on observed results. We aim to encourage and facilitate the broad adoption of quantitative bias analyses into observational oral health research. To this end, we provide an overview of quantitative bias analysis techniques, including a step-by-step implementation guide. We also provide a detailed appendix that guides readers through an applied example using real data obtained from a prospective observational cohort study of preconception periodontitis in relation to time to pregnancy. Quantitative bias analysis methods are available to all investigators. When appropriately applied to observational studies, findings from such studies can have a greater impact in the broader research context.
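For one of the biases mentioned above, an unmeasured binary confounder, the simplest quantitative bias analysis needs only three externally specified parameters: the confounder-outcome risk ratio and the confounder prevalence among the exposed and the unexposed. A minimal sketch follows; all parameter values are hypothetical and are not taken from the periodontitis example in the paper's appendix.

```python
def adjust_for_unmeasured_confounder(rr_obs, rr_cd, p1, p0):
    """Simple bias analysis for an unmeasured binary confounder.
    rr_obs: observed exposure-outcome risk ratio
    rr_cd:  confounder-outcome risk ratio
    p1, p0: confounder prevalence among the exposed / the unexposed."""
    bias_factor = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    return rr_obs / bias_factor, bias_factor

# Hypothetical example: observed RR of 1.5, a confounder that doubles
# outcome risk and is more common among the exposed (60 % vs 30 %).
rr_adj, bf = adjust_for_unmeasured_confounder(1.5, 2.0, 0.6, 0.3)
```

Here the bias factor is about 1.23, leaving a bias-adjusted RR of about 1.22, which quantifies how much of the observed association such a confounder could account for.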
Affiliation(s)
- J.C. Bond
- Department of Health Policy and Health Services Research, Boston University Henry M. Goldman School of Dental Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- M.P. Fox
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Department of Global Health, Boston University School of Public Health, Boston, MA, USA
- L.A. Wise
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- B. Heaton
- Department of Health Policy and Health Services Research, Boston University Henry M. Goldman School of Dental Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
4.
Fox MP, MacLehose RF, Lash TL. SAS and R code for probabilistic quantitative bias analysis for misclassified binary variables and binary unmeasured confounders. Int J Epidemiol 2023; 52:1624-1633. PMID: 37141446. PMCID: PMC10555728. DOI: 10.1093/ije/dyad053.
Abstract
Systematic error from selection bias, uncontrolled confounding, and misclassification is ubiquitous in epidemiologic research but is rarely quantified using quantitative bias analysis (QBA). This gap may in part be due to the lack of readily modifiable software to implement these methods. Our objective is to provide computing code that can be tailored to an analyst's dataset. We briefly describe the methods for implementing QBA for misclassification and uncontrolled confounding, and present the reader with example code showing how such bias analyses, using both summary-level and individual record-level data, can be implemented in both SAS and R. Our examples show how adjustment for uncontrolled confounding and misclassification can be implemented. The resulting bias-adjusted point estimates can then be compared with conventional results to see the direction and magnitude of the bias. Further, we show how 95% simulation intervals can be generated and compared with conventional 95% confidence intervals to see the impact of the bias on uncertainty. Easy-to-implement code that users can apply to their own datasets will, we hope, spur more frequent use of these methods and help prevent poor inferences drawn from studies that do not quantify the impact of systematic error on their results.
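The paper's SAS and R code is the authoritative implementation. As a language-neutral sketch of the simulation-interval idea it describes, the fragment below corrects a summary 2x2 table for exposure misclassification under sampled sensitivity/specificity, then adds conventional random error to each corrected estimate, so the 95 % simulation interval reflects both systematic and random uncertainty. The counts and bias-parameter distributions are hypothetical.

```python
import math
import random

def simulation_interval(a, b, c, d, se_params, sp_params, n_iter=5000, seed=2):
    """95 % simulation interval for a misclassification-corrected OR,
    combining systematic error (sampled Se/Sp) with random error
    (the conventional standard error of the log-OR)."""
    rng = random.Random(seed)
    se_conv = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # conventional SE of log-OR
    n1, n0 = a + b, c + d
    draws = []
    for _ in range(n_iter):
        se = rng.betavariate(*se_params)   # sensitivity draw
        sp = rng.betavariate(*sp_params)   # specificity draw
        denom = se + sp - 1
        if denom <= 0:
            continue                       # worse-than-random classification
        a_t = (a - (1 - sp) * n1) / denom  # back-corrected exposed cases
        c_t = (c - (1 - sp) * n0) / denom  # back-corrected exposed controls
        b_t, d_t = n1 - a_t, n0 - c_t
        if min(a_t, b_t, c_t, d_t) <= 0:
            continue                       # incompatible with observed data
        log_or = math.log((a_t * d_t) / (b_t * c_t))
        draws.append(log_or + rng.gauss(0.0, se_conv))  # layer on random error
    draws.sort()
    k = len(draws)
    return (math.exp(draws[int(0.025 * k)]), math.exp(draws[int(0.975 * k)]))
```

The resulting interval is typically wider than the conventional 95 % CI, making visible the extra uncertainty that a misclassification-blind analysis silently ignores.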
Affiliation(s)
- Matthew P Fox
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Department of Global Health, Boston University School of Public Health, Boston, MA, USA
- Richard F MacLehose
- Department of Epidemiology, University of Minnesota School of Public Health, University of Minnesota, Minneapolis, MN, USA
- Timothy L Lash
- Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA, USA
5.
Brendel P, Torres A, Arah OA. Simultaneous adjustment of uncontrolled confounding, selection bias and misclassification in multiple-bias modelling. Int J Epidemiol 2023; 52:1220-1230. PMID: 36718093. PMCID: PMC10893963. DOI: 10.1093/ije/dyad001.
Abstract
BACKGROUND Adjusting for multiple biases usually involves adjusting for one bias at a time, with careful attention to the order in which these biases are adjusted. A novel, alternative approach to multiple-bias adjustment involves the simultaneous adjustment of all biases via imputation and/or regression weighting. The imputed value or weight corresponds to the probability of the missing data and serves to 'reconstruct' the unbiased data that would be observed based on the provided assumptions of the degree of bias. METHODS We motivate and describe the steps necessary to implement this method. We also demonstrate the validity of this method through a simulation study with an exposure-outcome relationship that is biased by uncontrolled confounding, exposure misclassification, and selection bias. RESULTS The study revealed that a non-biased effect estimate can be obtained when correct bias parameters are applied. It also found that incorrect specification of every bias parameter by +/-25% still produced an effect estimate with less bias than the observed, biased effect. CONCLUSIONS Simultaneous multi-bias analysis is a useful way of investigating and understanding how multiple sources of bias may affect naive effect estimates. This new method can be used to enhance the validity and transparency of real-world evidence obtained from observational, longitudinal studies.
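The full simultaneous method reweights or imputes all biases at once; the fragment below sketches only the weighting idea for the selection-bias component, rescaling each observed 2x2 cell by the inverse of its selection probability before the odds ratio is recomputed. The cell counts and selection probabilities are invented for illustration and are not from the paper's simulation study.

```python
def selection_bias_weighted_or(cells, selection_probs):
    """Inverse-probability-of-selection weighting on a 2x2 table.
    Both dicts use keys "a" (exposed cases), "b" (unexposed cases),
    "c" (exposed controls), "d" (unexposed controls)."""
    w = {k: cells[k] / selection_probs[k] for k in cells}  # reweighted counts
    or_obs = (cells["a"] * cells["d"]) / (cells["b"] * cells["c"])
    or_adj = (w["a"] * w["d"]) / (w["b"] * w["c"])
    return or_obs, or_adj

# Hypothetical study in which exposed cases were under-sampled
# (lowest selection probability), so weighting pulls the OR upward.
or_obs, or_adj = selection_bias_weighted_or(
    {"a": 100, "b": 200, "c": 50, "d": 200},
    {"a": 0.8, "b": 0.9, "c": 0.9, "d": 0.95},
)
```

In the simultaneous framework, weights of this kind are combined with misclassification and confounding adjustments in a single model rather than applied one bias at a time.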
Affiliation(s)
- Paul Brendel
- Department of Epidemiology, Fielding School of Public Health, UCLA, Los Angeles, CA, USA
- Valo Health, Boston, MA, USA
- Onyebuchi A Arah
- Department of Epidemiology, Fielding School of Public Health, UCLA, Los Angeles, CA, USA
- Department of Statistics, College of Letters and Science, UCLA, Los Angeles, CA, USA
- Department of Public Health, Section for Epidemiology, Aarhus University, Aarhus, Denmark
6.
Raittio E, Sofi-Mahmudi A, Shamsoddin E. The use of the phrase "data not shown" in dental research. PLoS One 2022; 17:e0272695. PMID: 35944050. PMCID: PMC9362922. DOI: 10.1371/journal.pone.0272695.
Abstract
OBJECTIVE The use of phrases such as "data/results not shown" is deemed an obscure way to present scientific findings. Our aim was to investigate how frequently papers published in dental journals used these phrases and what kinds of results the authors referred to with them in 2021. METHODS We searched the Europe PubMed Central (PMC) database for open-access articles from studies published in PubMed-indexed dental journals until December 31st, 2021. We searched the full texts for "data/results not shown" phrases and calculated the proportion of articles containing them among all available articles. For studies published in 2021, we evaluated whether the phrases referred to confirmatory results, negative results, peripheral results, sensitivity analysis results, future results, or other/unclear results. Journal- and publisher-related differences in publishing studies with the phrases in 2021 were tested with Fisher's exact test using R v4.1.1. RESULTS The percentage of studies with the relevant phrases among all studies in the database decreased from 13% to 3% between 2010 and 2020. In 2021, out of 2,434 studies published in 73 different journals by eight publishers, 67 (2.8%) used the phrases. Journal- and publisher-related differences in publishing studies with the phrases were detected in 2021 (p = 0.001 and p = 0.005, respectively). Most commonly, the phrases referred to negative (n = 16, 24%), peripheral (n = 22, 33%) or confirmatory (n = 11, 16%) results. The significance of the unpublished results to which the phrases referred varied considerably across studies. CONCLUSION Over the last decade, there has been a marked decrease in the use of the phrases "data/results not shown" in dental journals. However, the phrases were still notably in use in dental studies in 2021, despite the ready availability of free online supplements and repositories.
Affiliation(s)
- Eero Raittio
- Institute of Dentistry, University of Eastern Finland, Kuopio, Finland
- Ahmad Sofi-Mahmudi
- Cochrane Iran Associate Centre, National Institute for Medical Research Development (NIMAD), Tehran, Iran
- Seqiz Health Network, Kurdistan University of Medical Sciences, Seqiz, Kurdistan, Iran
- Erfan Shamsoddin
- Cochrane Iran Associate Centre, National Institute for Medical Research Development (NIMAD), Tehran, Iran
7.
Innes GK, Bhondoekhan F, Lau B, Gross AL, Ng DK, Abraham AG. The Measurement Error Elephant in the Room: Challenges and Solutions to Measurement Error in Epidemiology. Epidemiol Rev 2022; 43:94-105. PMID: 34664648. PMCID: PMC9005058. DOI: 10.1093/epirev/mxab011.
Abstract
Measurement error, although ubiquitous, is uncommonly acknowledged and rarely assessed or corrected in epidemiologic studies. This review offers a straightforward guide to common problems caused by measurement error in research studies and a review of several accessible bias-correction methods for epidemiologists and data analysts. Although most correction methods require criterion validation including a gold standard, there are also ways to evaluate the impact of measurement error and potentially correct for it without such data. Technical difficulty ranges from simple algebra to more complex algorithms that require expertise, fine tuning, and computational power. However, at all skill levels, software packages and methods are available and can be used to understand the threat to inferences that arises from imperfect measurements.
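One of the "simple algebra" corrections this kind of review covers is disattenuation under classical, nondifferential measurement error in a continuous exposure: the observed regression slope equals the true slope multiplied by the reliability ratio, so dividing by that ratio recovers it. The sketch below assumes the error variance is known, for example from a replicate-measurement or validation substudy; the numbers are hypothetical, and the correction does not extend unchanged to models with covariates.

```python
def disattenuate_slope(beta_obs, var_x_obs, var_error):
    """Correct a simple linear-regression slope for classical
    measurement error in the exposure X.
    reliability = Var(true X) / Var(observed X)."""
    reliability = (var_x_obs - var_error) / var_x_obs
    if reliability <= 0:
        raise ValueError("error variance exceeds observed variance")
    return beta_obs / reliability, reliability

# Hypothetical inputs: observed slope 0.3, observed Var(X) = 4.0,
# and error variance 1.0 estimated from a replicate-measurement substudy.
beta_true, lam = disattenuate_slope(0.3, 4.0, 1.0)
```

With a reliability of 0.75, the observed slope of 0.3 disattenuates to 0.4, a 33 % correction that an uncorrected analysis would silently absorb into the estimate.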
Affiliation(s)
- Alison G Abraham
- Correspondence to Dr. Alison G. Abraham, Department of Epidemiology, University of Colorado, Anschutz Medical Campus, 1635 Aurora Ct, Aurora, CO 80045
8.
Chaleplioglou A, Koulouris A. Preprint paper platforms in the academic scholarly communication environment. Journal of Librarianship and Information Science 2021. DOI: 10.1177/09610006211058908.
Abstract
Academic scholarly communication is the predominant business of researchers, scientists, and scholars. It is the core element in promoting scientific thought and investigation and in building solid knowledge. The development of preprint platforms, web interfaces to server repositories of electronic scholarly papers submitted by their authors and openly available to the scientific community, has introduced a new form of academic communication. Distributing a preprint of a scientific manuscript allows the authors to claim priority of discovery, in a manner similar to conference proceedings, but also creates an anteriority (prior disclosure) that can prevent protection by a patent application. Herein, we review the scope and role of preprint platforms in academia across different research fields. We explore individual cases: arXiv, SSRN, OSF Preprints, HAL, bioRxiv, EconStor, RePEc, PhilArchive, Research Square, viXra, Cryptology ePrint Archive, Preprints.org, ChinaXiv, medRxiv, JMIR Preprints, Authorea, ChemRxiv, engrXiv, e-LiS, SciELO, PsyArXiv, F1000 Research, and Zenodo. We discuss their significance in promoting scientific discovery, the potential risks to scientific integrity, their policies on data distribution and intellectual property rights, and the advantages and disadvantages for stakeholders: authors, institutions, states, scientific journals, the scientific community, and the public.
Affiliation(s)
- Artemis Chaleplioglou
- University of West Attica, Greece
- Biomedical Research Foundation of the Academy of Athens, Greece
9.
Weinstein R, Parikh-Das AM, Salonga R, Schuemie M, Ryan PB, Atillasoy E, Hermanowski-Vosatka A, Eichenbaum G, Berlin JA. A systematic assessment of the epidemiologic literature regarding an association between acetaminophen exposure and cancer. Regul Toxicol Pharmacol 2021; 127:105043. PMID: 34517075. DOI: 10.1016/j.yrtph.2021.105043.
Abstract
Introduced in the 1950s, acetaminophen is one of the most widely used antipyretics and analgesics worldwide. In 1999, the International Agency for Research on Cancer (IARC) reviewed the epidemiologic studies of acetaminophen and judged the data "inadequate" to conclude that it is carcinogenic. In 2019, the California Office of Environmental Health Hazard Assessment initiated a review of the carcinogenic hazard potential of acetaminophen. To inform this review, the authors performed a comprehensive literature search and identified 136 epidemiologic studies, which for most cancer types suggest no alteration in risk associated with acetaminophen use. For three cancer types, renal cell, liver, and some forms of lymphohematopoietic cancer, some studies suggest an increased risk; however, multiple factors unique to acetaminophen need to be considered to determine whether these results are real and clinically meaningful. The objective of this publication is to analyze the results of these epidemiologic studies using a framework that accounts for the inherent challenges of evaluating acetaminophen, including broad population-wide use in multiple disease states, difficulties with exposure measurement, protopathic bias, channeling bias, and recall bias. When evaluated using this framework, the data do not support a causal association between acetaminophen use and cancer.
Affiliation(s)
- Evren Atillasoy
- Johnson & Johnson Consumer Products US, Fort Washington, PA, USA
10.
Gray CM, Grimson F, Layton D, Pocock S, Kim J. A Framework for Methodological Choice and Evidence Assessment for Studies Using External Comparators from Real-World Data. Drug Saf 2021; 43:623-633. PMID: 32440847. PMCID: PMC7305259. DOI: 10.1007/s40264-020-00944-1.
Abstract
Several approaches have been proposed recently to accelerate the pathway from drug discovery to patient access. These include novel designs such as using controls external to the clinical trial where standard randomised controls are not feasible. In parallel, there has been rapid growth in the application of routinely collected healthcare ‘real-world’ data for post-market safety and effectiveness studies. Thus, using real-world data to establish an external comparator arm in clinical trials is a natural next step. Regulatory authorities have begun to endorse the use of external comparators in certain circumstances, with some positive outcomes for new drug approvals. Given the potential to introduce bias associated with observational studies, there is a need for recommendations on how external comparators should be best used. In this article, we propose an evaluation framework for real-world data external comparator studies that enables full assessment of available evidence and related bias. We define the principle of exchangeability and discuss the applicability of criteria described by Pocock for consideration of the exchangeability of the external and trial populations. We explore how trial designs using real-world data external comparators fit within the evidence hierarchy and propose a four-step process for good conduct of external comparator studies. This process is intended to maximise the quality of evidence based on careful study design and the combination of covariate balancing, bias analysis and combining outcomes.
Affiliation(s)
- Christen M Gray
- EMEA Centre of Excellence for Retrospective Studies, IQVIA, London, UK
- Fiona Grimson
- EMEA Centre of Excellence for Retrospective Studies, IQVIA, London, UK
- Deborah Layton
- EMEA Centre of Excellence for Retrospective Studies, IQVIA, London, UK
- School of Pharmacy and Bioengineering, Keele University, Staffordshire, UK
- School of Pharmacy and Biomedical Sciences, University of Portsmouth, Portsmouth, UK
- Stuart Pocock
- Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
- Joseph Kim
- EMEA Centre of Excellence for Retrospective Studies, IQVIA, London, UK
- Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
- School of Pharmacy, University College London, London, UK
Collapse
|
11
|
Harris DA, Sobers M, Greenwald ZR, Simmons AE, Soucy JPR, Rosella LC. Is 3 feet of physical distancing enough? Clin Infect Dis 2021; 74:368-370. [PMID: 33988230] [PMCID: PMC8194572] [DOI: 10.1093/cid/ciab439]
Affiliation(s)
- Daniel A Harris, Mercedes Sobers, Zoë R Greenwald, Alison E Simmons, Jean-Paul R Soucy, Laura C Rosella
- Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
12
Petersen JM, Ranker LR, Barnard-Mayers R, MacLehose RF, Fox MP. A systematic review of quantitative bias analysis applied to epidemiological research. Int J Epidemiol 2021; 50:1708-1730. [PMID: 33880532] [DOI: 10.1093/ije/dyab061]
Abstract
BACKGROUND Quantitative bias analysis (QBA) measures study errors in terms of direction, magnitude and uncertainty. This systematic review aimed to describe how QBA has been applied in epidemiological research in 2006-19. METHODS We searched PubMed for English peer-reviewed studies applying QBA to real-data applications. We also included studies citing selected sources or which were identified in a previous QBA review in pharmacoepidemiology. For each study, we extracted the rationale, methodology, bias-adjusted results and interpretation and assessed factors associated with reproducibility. RESULTS Of the 238 studies, the majority were embedded within papers whose main inferences were drawn from conventional approaches as secondary (sensitivity) analyses to quantify specific biases (52%) or to assess the extent of bias required to shift the point estimate to the null (25%); 10% were standalone papers. The most common approach was probabilistic (57%). Misclassification was modelled in 57%, uncontrolled confounder(s) in 40% and selection bias in 17%. Most did not consider multiple biases or correlations between errors. When specified, bias parameters came from the literature (48%) more often than internal validation studies (29%). The majority (60%) of analyses resulted in >10% change from the conventional point estimate; however, most investigators (63%) did not alter their original interpretation. Degree of reproducibility related to inclusion of code, formulas, sensitivity analyses and supplementary materials, as well as the QBA rationale. CONCLUSIONS QBA applications were rare though increased over time. Future investigators should reference good practices and include details to promote transparency and to serve as a reference for other researchers.
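The probabilistic approach that dominates this literature can be illustrated with a minimal sketch: draw sensitivity and specificity from prior distributions, apply the standard matrix-method correction for nondifferential exposure misclassification, and summarize the distribution of bias-adjusted odds ratios. The 2x2 counts and uniform priors below are hypothetical, not taken from any of the reviewed studies.

```python
import random

def adjust_or(a, b, c, d, se, sp):
    """Matrix-method correction of a 2x2 table (cases: a exposed, b
    unexposed; controls: c exposed, d unexposed) for nondifferential
    exposure misclassification with sensitivity se and specificity sp.
    Returns the bias-adjusted odds ratio, or None when the drawn
    parameters are incompatible with the observed data."""
    n1, n0 = a + b, c + d
    A = (a - (1 - sp) * n1) / (se + sp - 1)  # corrected exposed cases
    C = (c - (1 - sp) * n0) / (se + sp - 1)  # corrected exposed controls
    B, D = n1 - A, n0 - C
    if min(A, B, C, D) <= 0:
        return None
    return (A * D) / (B * C)

def probabilistic_qba(a, b, c, d, n_iter=20000, seed=1):
    """Monte Carlo bias analysis: sample se and sp from uniform priors,
    keep feasible corrections, and report the median and a 95%
    simulation interval of the bias-adjusted odds ratio."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_iter):
        se = rng.uniform(0.75, 0.95)   # hypothetical prior on sensitivity
        sp = rng.uniform(0.85, 0.99)   # hypothetical prior on specificity
        est = adjust_or(a, b, c, d, se, sp)
        if est is not None:
            draws.append(est)
    draws.sort()
    return (draws[len(draws) // 2],            # median
            draws[int(0.025 * len(draws))],    # 2.5th percentile
            draws[int(0.975 * len(draws))])    # 97.5th percentile
```

With hypothetical counts a=45, b=255, c=30, d=270 the conventional odds ratio is about 1.59; the simulation interval then shows how far plausible misclassification could move that estimate.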
Affiliation(s)
- Julie M Petersen
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Lynsie R Ranker
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Ruby Barnard-Mayers
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Richard F MacLehose
- Division of Epidemiology and Community Health, University of Minnesota, School of Public Health, Minneapolis, MN, USA
- Matthew P Fox
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Department of Global Health, Boston University School of Public Health, Boston, MA, USA
13
Stocking K, Wilkinson J, Lensen S, Brison DR, Roberts SA, Vail A. Are interventions in reproductive medicine assessed for plausible and clinically relevant effects? A systematic review of power and precision in trials and meta-analyses. Hum Reprod 2019; 34:659-665. [PMID: 30838395] [PMCID: PMC6443111] [DOI: 10.1093/humrep/dez017]
Abstract
STUDY QUESTION How much statistical power do randomised controlled trials (RCTs) and meta-analyses have to investigate the effectiveness of interventions in reproductive medicine? SUMMARY ANSWER The largest trials in reproductive medicine are unlikely to detect plausible improvements in live birth rate (LBR), and meta-analyses do not make up for this shortcoming. WHAT IS KNOWN ALREADY Effectiveness of interventions is best evaluated using RCTs. In order to be informative, these trials should be designed to have sufficient power to detect the smallest clinically relevant effect. Similar trials can subsequently be pooled in meta-analyses to more precisely estimate treatment effects. STUDY DESIGN, SIZE, DURATION A review of power and precision in 199 RCTs and meta-analyses from 107 Cochrane Reviews was conducted. PARTICIPANTS/MATERIALS, SETTING, METHODS Systematic reviews published by Cochrane Gynaecology and Fertility with the primary outcome live birth were identified. For each live birth (or ongoing pregnancy) meta-analysis and for the largest RCT in each, we calculated the power to detect absolute improvements in LBR of varying sizes. Additionally, the 95% CIs of estimated treatment effects from each meta-analysis and RCT were recorded, as these indicate the precision of the result. MAIN RESULTS AND THE ROLE OF CHANCE Median (interquartile range) power to detect an improvement in LBR of 5 percentage points (pp) (e.g. 25-30%) was 13% (8-21%) for RCTs and 16% (9-33%) for meta-analyses. No RCTs and only 2% of meta-analyses achieved 80% power to detect an improvement of 5 pp. Median power was high (85% for trials and 93% for meta-analyses) only in relation to 20 pp absolute LBR improvement, although substantial numbers of trials and meta-analyses did not achieve 80% power even for this improbably large effect size. Median width of 95% CIs was 25 pp and 21 pp for RCTs and meta-analyses, respectively. 
We found that 28% of Cochrane Reviews with LBR as the primary outcome contain no live birth (or ongoing pregnancy) data. LARGE-SCALE DATA The data used in this study may be accessed at https://osf.io/852tn/?view_only=90f1579ce72747ccbe572992573197bd. LIMITATIONS, REASONS FOR CAUTION The design and analysis decisions used in this study are predicted to overestimate the power of trials and meta-analyses, and the size of the problem is therefore likely understated. For some interventions, it is possible that larger trials not reporting live birth or ongoing pregnancy have been conducted, which were not included in our sample. In relation to meta-analyses, we calculated power as though all participants were included in a single trial. This ignores heterogeneity between trials in a meta-analysis, and will cause us to overestimate power. WIDER IMPLICATIONS OF THE FINDINGS Trials capable of detecting realistic improvements in LBR are lacking in reproductive medicine, and meta-analyses are not large enough to overcome this deficiency. This situation will lead to unwarranted pessimism as well as unjustified enthusiasm regarding reproductive interventions, neither of which are consistent with the practice of evidence-based medicine or the idea of informed patient choice. However, RCTs and meta-analyses remain vital to establish the effectiveness of fertility interventions. We discuss strategies to improve the evidence base and call for collaborative studies focusing on the most important research questions. STUDY FUNDING/COMPETING INTEREST(S) There was no specific funding for this study. KS and SL declare no conflict of interest. AV consults for the Human Fertilisation and Embryology Authority (HFEA): all fees are paid directly to AV's employer. JW declares that publishing research benefits his career. SR is a Statistical Editor for Human Reproduction. JW and AV are Statistical Editors for Cochrane Gynaecology and Fertility. 
DRB is funded by the NHS as Scientific Director of a clinical IVF service. PROSPERO REGISTRATION NUMBER None.
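The power figures summarized above rest on the standard normal approximation for comparing two independent proportions. A sketch of that calculation follows; the arm sizes and rates are hypothetical illustrations, not values extracted from the review.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two independent
    proportions with equal arm sizes (simple normal approximation)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    se = (p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm) ** 0.5
    return z.cdf(abs(p2 - p1) / se - z_crit)

# Detecting a 5-percentage-point improvement in live birth rate
# (25% -> 30%) needs roughly 1250 participants per arm for 80% power,
# far larger than most trials in reproductive medicine.
power_large = power_two_proportions(0.25, 0.30, 1250)
power_small = power_two_proportions(0.25, 0.30, 100)
```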
Affiliation(s)
- K Stocking
- Department of Medical Statistics, Manchester University NHS Foundation Trust, Manchester, UK
- Centre for Biostatistics, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, UK
- J Wilkinson
- Centre for Biostatistics, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, UK
- S Lensen
- Department of Obstetrics and Gynaecology, University of Auckland, New Zealand
- Medical Research Council Clinical Trials Unit, University College London, London, UK
- D R Brison
- Department of Reproductive Medicine, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
- Maternal and Fetal Health Research Centre, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Sciences Centre, Manchester, UK
- S A Roberts
- Centre for Biostatistics, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, UK
- A Vail
- Centre for Biostatistics, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester, UK
14
Shaw PA, Gustafson P, Carroll RJ, Deffner V, Dodd KW, Keogh RH, Kipnis V, Tooze JA, Wallace MP, Küchenhoff H, Freedman LS. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 2-More complex methods of adjustment and advanced topics. Stat Med 2020; 39:2232-2263. [PMID: 32246531] [PMCID: PMC7272296] [DOI: 10.1002/sim.8531]
Abstract
We continue our review of issues related to measurement error and misclassification in epidemiology. We further describe methods of adjusting for biased estimation caused by measurement error in continuous covariates, covering likelihood methods, Bayesian methods, moment reconstruction, moment-adjusted imputation, and multiple imputation. We then describe which methods can also be used with misclassification of categorical covariates. Methods of adjusting estimation of distributions of continuous variables for measurement error are then reviewed. Illustrative examples are provided throughout these sections. We provide lists of available software for implementing these methods and also provide the code for implementing our examples in the Supporting Information. Next, we present several advanced topics, including data subject to both classical and Berkson error, modeling continuous exposures with measurement error, and categorical exposures with misclassification in the same model, variable selection when some of the variables are measured with error, adjusting analyses or design for error in an outcome variable, and categorizing continuous variables measured with error. Finally, we provide some advice for the often met situations where variables are known to be measured with substantial error, but there is only an external reference standard or partial (or no) information about the type or magnitude of the error.
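The distinction the authors draw between classical and Berkson error can be demonstrated in a small simulation: classical error in a covariate attenuates the regression slope toward the null by the reliability ratio, whereas Berkson error leaves it approximately unbiased. All parameter values below are illustrative assumptions, not from the guidance document.

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

rng = random.Random(7)
n, beta = 50_000, 1.0   # sample size and true slope (illustrative)

# Classical error: W = X + U; the outcome depends on the true X.
x = [rng.gauss(0, 1) for _ in range(n)]
w = [xi + rng.gauss(0, 1) for xi in x]
y = [beta * xi + rng.gauss(0, 0.5) for xi in x]
slope_classical = fit_slope(w, y)   # attenuated toward beta * 0.5 here

# Berkson error: the true X = assigned W + U; outcome depends on X.
w_b = [rng.gauss(0, 1) for _ in range(n)]
x_b = [wi + rng.gauss(0, 1) for wi in w_b]
y_b = [beta * xi + rng.gauss(0, 0.5) for xi in x_b]
slope_berkson = fit_slope(w_b, y_b)  # approximately unbiased for beta
```

With equal exposure and error variances the reliability ratio is 0.5, so the classical-error slope lands near half the true value while the Berkson-error slope stays near it.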
Affiliation(s)
- Pamela A Shaw
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Paul Gustafson
- Department of Statistics, University of British Columbia, Vancouver, British Columbia, Canada
- Raymond J Carroll
- Department of Statistics, Texas A&M University, College Station, Texas, USA
- School of Mathematical and Physical Sciences, University of Technology Sydney, Broadway, New South Wales, Australia
- Veronika Deffner
- Statistical Consulting Unit StaBLab, Department of Statistics, Ludwig-Maximilians-Universität, Munich, Germany
- Kevin W Dodd
- Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, Bethesda, Maryland, USA
- Ruth H Keogh
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
- Victor Kipnis
- Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, Bethesda, Maryland, USA
- Janet A Tooze
- Department of Biostatistics and Data Science, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Michael P Wallace
- Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada
- Helmut Küchenhoff
- Statistical Consulting Unit StaBLab, Department of Statistics, Ludwig-Maximilians-Universität, Munich, Germany
- Laurence S Freedman
- Biostatistics and Biomathematics Unit, Gertner Institute for Epidemiology and Health Policy Research, Sheba Medical Center, Tel Hashomer, Israel
- Information Management Services Inc., Rockville, Maryland, USA
15
Boffetta P, Farioli A, Rizzello E. Application of epidemiological findings to individuals. Med Lav 2020; 111:10-21. [PMID: 32096769] [PMCID: PMC7809964] [DOI: 10.23749/mdl.v111i1.9055]
Abstract
Three types of issues need to be considered in the application of epidemiology results to individuals. First, epidemiology results are subject to random error, and can be applied only to an ideal subject with average values of all variables under study, including potential confounders included in the regression models. Second, the observational nature of epidemiology makes it susceptible to systematic error, and any extrapolation to individuals would mirror the validity of the original results. Quantitative bias analysis has been proposed to assess the likelihood, direction and magnitude of bias, but this has not yet become part of the normal practice of epidemiology. Finally, external validity of the results (i.e., their application to individuals and populations other than those included in the underlying studies) needs to be addressed, including population-based factors, such as heterogeneity in exposure or disease circumstances, and individual-based factors, such as interaction of the risk factors of interest with other determinants of the disease. Similar considerations apply to the application of results of clinical trials to individual patients, although in these studies sources of systematic error are better controlled.
16
Fox MP, Lash TL. Quantitative bias analysis for study and grant planning. Ann Epidemiol 2020; 43:32-36. [PMID: 32113733] [DOI: 10.1016/j.annepidem.2020.01.013]
Abstract
PURPOSE Epidemiologists often think about the balance between study error and cost-efficiency in terms of study design and strategies to reduce random error. We less often consider cost-efficiencies in terms of dealing with systematic errors that arise within a study, such as in deciding how to measure study variables and misclassification implications. METHODS Given the information used to inform a study size calculation, the expected study data can be simulated during study planning, and the impact of anticipated biases can be estimated using quantitative bias analysis. This would allow investigators and stakeholders to identify areas where better data collection through more valid instruments is critical and where additional investment will not yield strong validity benefits. This could promote better use of study resources and help increase investigators' chances of funding by demonstrating they have thought through biases and have a plan for mitigating the impact. RESULTS We demonstrate how this would work with a practical example using the relationship between smoking during pregnancy as measured on birth certificates and incident breast cancer. CONCLUSIONS We show that although exposure sensitivity would likely be poor, spending more money to get a better smoking measure is unlikely to yield more valid estimates.
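The planning exercise described here, anticipating how misclassification will distort a planned study's results, can be sketched by computing the expected observed 2x2 table under assumed bias parameters. The counts, prevalence, odds ratio, sensitivity and specificity below are hypothetical placeholders, not the article's birth-certificate example.

```python
def expected_observed_table(n_cases, n_controls, p_exp_controls, true_or, se, sp):
    """Expected 2x2 cell counts for a planned case-control study after
    applying an anticipated exposure sensitivity (se) and specificity
    (sp) nondifferentially to the true exposure status."""
    # exposure prevalence in cases implied by the assumed odds ratio
    odds_ctrl = p_exp_controls / (1 - p_exp_controls)
    odds_case = true_or * odds_ctrl
    p_exp_cases = odds_case / (1 + odds_case)

    def classified_exposed(n, p):
        true_exposed = n * p
        # true positives plus false positives among the truly unexposed
        return true_exposed * se + (n - true_exposed) * (1 - sp)

    a = classified_exposed(n_cases, p_exp_cases)
    c = classified_exposed(n_controls, p_exp_controls)
    return a, n_cases - a, c, n_controls - c

def table_or(a, b, c, d):
    """Odds ratio from 2x2 cell counts."""
    return (a * d) / (b * c)
```

For a hypothetical design (1000 cases, 1000 controls, 20% exposure among controls, true OR 2.0), poor sensitivity (0.6) with high specificity (0.99) would be expected to attenuate the observed OR to roughly 1.8, the kind of pre-study calculation that shows whether paying for a more valid instrument is worthwhile.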
Affiliation(s)
- Matthew P Fox
- Department of Epidemiology, Boston University School of Public Health, Boston, MA
- Department of Global Health, Boston University School of Public Health, Boston, MA
- Timothy L Lash
- Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA
17
Specogna AV, Sinicrope FA. Defining colon cancer biomarkers by using deep learning. Lancet 2020; 395:314-316. [PMID: 32007149] [DOI: 10.1016/s0140-6736(20)30034-9]
Affiliation(s)
- Frank A Sinicrope
- Mayo Clinic & Mayo Comprehensive Cancer Center, Rochester, MN 55905, USA
18
Harper S. A Future for Observational Epidemiology: Clarity, Credibility, Transparency. Am J Epidemiol 2019; 188:840-845. [PMID: 30877294] [DOI: 10.1093/aje/kwy280]
Abstract
Observational studies are ambiguous, difficult, and necessary for epidemiology. Presently, there are concerns that the evidence produced by most observational studies in epidemiology is not credible and contributes to research waste. I argue that observational epidemiology could be improved by focusing greater attention on 1) defining questions that make clear whether the inferential goal is descriptive or causal; 2) greater utilization of quantitative bias analysis and alternative research designs that aim to decrease the strength of assumptions needed to estimate causal effects; and 3) promoting, experimenting with, and perhaps institutionalizing both reproducible research standards and replication studies to evaluate the fragility of study findings in epidemiology. Greater clarity, credibility, and transparency in observational epidemiology will help to provide reliable evidence that can serve as a basis for making decisions about clinical or population-health interventions.
Affiliation(s)
- Sam Harper
- Department of Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, Quebec
- Institute for Health and Social Policy, McGill University, Montreal, Quebec
19
Abstract
Supplemental Digital Content is available in the text. Background: MOBI-Kids is a 14-country case–control study designed to investigate the potential effects of electromagnetic field exposure from mobile telecommunications devices on brain tumor risk in children and young adults conducted from 2010 to 2016. This work describes differences in cellular telephone use and personal characteristics among interviewed participants and refusers responding to a brief nonrespondent questionnaire. It also assesses the potential impact of nonparticipation selection bias on study findings. Methods: We compared nonrespondent questionnaires completed by 77 cases and 498 control refusers with responses from 683 interviewed cases and 1501 controls (suspected appendicitis patients) in six countries (France, Germany, Israel, Italy, Japan, and Spain). We derived selection bias factors and estimated inverse probability of selection weights for use in analysis of MOBI-Kids data. Results: The prevalence of ever-regular use was somewhat higher among interviewed participants than nonrespondent questionnaire respondents 10–14 years of age (68% vs. 62% controls, 63% vs. 48% cases); in those 20–24 years, the prevalence was ≥97%. Interviewed controls and cases in the 15- to 19- and 20- to 24-year-old age groups were more likely to have a time since start of use of 5+ years. Selection bias factors generally indicated a small underestimation in cellular telephone odds ratios (ORs) ranging from 0.96 to 0.97 for ever-regular use and 0.92 to 0.94 for time since start of use (5+ years), but varied in alternative hypothetical scenarios considered. Conclusions: Although limited by small numbers of nonrespondent questionnaire respondents, findings generally indicated a small underestimation in cellular telephone ORs due to selective nonparticipation.
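The selection bias factors reported above follow the standard formula relating the four cell-specific participation probabilities to the multiplicative bias in an odds ratio. A sketch, with hypothetical participation probabilities rather than the MOBI-Kids estimates:

```python
def selection_bias_factor(s_case_exp, s_case_unexp, s_ctrl_exp, s_ctrl_unexp):
    """Multiplicative bias in the observed odds ratio induced by the four
    selection (participation) probabilities, one per exposure-by-status cell."""
    return (s_case_exp * s_ctrl_unexp) / (s_case_unexp * s_ctrl_exp)

def adjust_or_for_selection(or_observed, bias_factor):
    """Divide the observed OR by the bias factor to recover the OR that
    would have been seen under complete participation."""
    return or_observed / bias_factor

# Hypothetical participation probabilities (not the MOBI-Kids estimates):
# exposed cases participate slightly less often than unexposed cases, so
# the bias factor falls below 1 and the observed OR is understated.
bias = selection_bias_factor(0.62, 0.68, 0.65, 0.65)
or_adjusted = adjust_or_for_selection(1.20, bias)
```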
20
Yeung EH, Kim K, Purdue-Smithe A, Bell G, Zolton J, Ghassabian A, Vafai Y, Robinson SL, Mumford SL. Child Health: Is It Really Assisted Reproductive Technology that We Need to Be Concerned About? Semin Reprod Med 2019; 36:183-194. [PMID: 30866005] [DOI: 10.1055/s-0038-1675778]
Abstract
Concerns remain about the health of children conceived by infertility treatment. Studies to date have predominantly not identified substantial long-term health effects after accounting for plurality, which is reassuring given the increasing numbers of children conceived by infertility treatment worldwide. However, as technological advances in treatment arise, ongoing studies remain critical for monitoring health effects. To study whether the techniques used in infertility treatment cause health differences, however, remains challenging due to identification of an appropriate comparison group, heterogeneous treatment, and confounding by the underlying causes of infertility. In fact, the factors that are associated with underlying infertility, including parental obesity and other specific male and female factors, may be important independent factors to consider. This review will summarize key methodological considerations in studying children conceived by infertility treatment including the evidence of associations between underlying infertility factors and child health.
Affiliation(s)
- Akhgar Ghassabian
- Department of Pediatrics, New York University School of Medicine, New York, New York
- Department of Environmental Medicine, New York University School of Medicine, New York, New York
- Department of Population Health, New York University School of Medicine, New York, New York
21
Epidemiologic analyses with error-prone exposures: review of current practice and recommendations. Ann Epidemiol 2018; 28:821-828. [PMID: 30316629] [DOI: 10.1016/j.annepidem.2018.09.001]
Abstract
PURPOSE Variables in observational studies are commonly subject to measurement error, but the impact of such errors is frequently ignored. As part of the STRengthening Analytical Thinking for Observational Studies Initiative, a task group on measurement error and misclassification seeks to describe the current practice for acknowledging and addressing measurement error. METHODS Task group on measurement error and misclassification conducted a literature survey of four types of research studies that are typically impacted by exposure measurement error: (1) dietary intake cohort studies, (2) dietary intake population surveys, (3) physical activity cohort studies, and (4) air pollution cohort studies. RESULTS The survey revealed that while researchers were generally aware that measurement error affected their studies, very few adjusted their analysis for the error. Most articles provided incomplete discussion of the potential effects of measurement error on their results. Regression calibration was the most widely used method of adjustment. CONCLUSIONS Methods to correct for measurement error are available but require additional data regarding the error structure. There is a great need to incorporate such data collection within study designs and improve the analytical approach. Increased efforts by investigators, editors, and reviewers are needed to improve presentation of research when data are subject to error.
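Regression calibration, the most widely used adjustment in the surveyed literature, replaces the error-prone measurement with an estimate of E[X|W] before fitting the outcome model. A minimal sketch using simulated replicate measurements to estimate the error variance (all parameter values are illustrative assumptions):

```python
import random

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(xs), mean(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

rng = random.Random(3)
n, beta = 40_000, 1.5                       # illustrative values
x = [rng.gauss(2, 1) for _ in range(n)]     # true exposure (unobserved)
w1 = [xi + rng.gauss(0, 0.8) for xi in x]   # replicate measurement 1
w2 = [xi + rng.gauss(0, 0.8) for xi in x]   # replicate measurement 2
y = [beta * xi + rng.gauss(0, 1) for xi in x]

wbar = [(u + v) / 2 for u, v in zip(w1, w2)]
var_u = var([u - v for u, v in zip(w1, w2)]) / 2   # per-replicate error variance
var_x = var(wbar) - var_u / 2                      # estimated true-exposure variance
lam = var_x / (var_x + var_u / 2)                  # reliability of the replicate mean
mu = mean(wbar)
x_hat = [mu + lam * (wi - mu) for wi in wbar]      # calibrated E[X | Wbar]

slope_naive = fit_slope(wbar, y)        # attenuated by the reliability lam
slope_calibrated = fit_slope(x_hat, y)  # approximately unbiased for beta
```

This is the extra data collection the authors call for: without the replicates (or a validation substudy), the error variance, and hence the calibration, cannot be estimated.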
22
Lash TL, Collin LJ, Van Dyke ME. The replication crisis in epidemiology: snowball, snow job, or winter solstice? Curr Epidemiol Rep 2018; 5:175-183. [PMID: 33907664] [PMCID: PMC8075285] [DOI: 10.1007/s40471-018-0148-x]
Abstract
PURPOSE OF REVIEW Like a snowball rolling down a steep hill, the most recent crisis over the perceived lack of reproducibility of scientific results has outpaced the evidence of crisis. It has led to new actions and new guidelines that have been rushed to market without plans for evaluation, metrics for success, or due consideration of the potential for unintended consequences. RECENT FINDINGS The perception of the crisis is at least partly a snow job, heavily influenced by a small number of centers lavishly funded by a single foundation, with undue and unsupported attention to preregistration as a solution to the perceived crisis. At the same time, the perception of crisis provides an opportunity for introspection. Two studies' estimates of association may differ because of undue attention on null hypothesis statistical testing, because of differences in the distribution of effect modifiers, because of differential susceptibility to threats to validity, or for other reasons. Perhaps the expectation of what reproducible epidemiology ought to look like is more misguided than the practice of epidemiology. We advocate for the idea of "replication and advancement." Studies should not only replicate earlier work, but also improve on it by enhancing the design or analysis. SUMMARY Abandoning blind reliance on null hypothesis significance testing for statistical inference, finding consensus on when pre-registration of non-randomized study protocols has merit, and focusing on replication and advancement are the most certain ways to emerge from this solstice for the better.
Affiliation(s)
- Timothy L Lash, Lindsay J Collin, Miriam E Van Dyke
- Department of Epidemiology, Rollins School of Public Health, Emory University
23
Hausmann L, Schweitzer B, Middleton FA, Schulz JB. Reviewer selection biases editorial decisions on manuscripts. J Neurochem 2018; 146:21-46. [PMID: 29377133] [DOI: 10.1111/jnc.14314]
Abstract
Many journals, including the Journal of Neurochemistry, enable authors to list peer reviewers as 'preferred' or 'opposed' suggestions to the editor. At the Journal of Neurochemistry, the handling editor (HE) may follow recommendations or select non-author-suggested reviewers (non-ASRs). We investigated whether selection of author-suggested reviewers (ASRs) influenced decisions on a paper, and whether differences might be related to a reviewer's, editor's or manuscript's geographical location. In this retrospective analysis, we compared original research articles submitted to the Journal of Neurochemistry from 2013 through 2016 that were either reviewed exclusively by non-ASRs, by at least one ASR, by at least one reviewer marked by the author as 'opposed' or none. Manuscript outcome, reviewer rating of manuscript quality, rating of the reviewers' performance by the editor (R-score), time to review, and the country of the editor, reviewers and manuscript author were analyzed using non-parametric rank-based comparisons, chi-square (χ2 ) analysis, multivariate linear regression, one-way analysis of variance, and inter-rater reliability determination. Original research articles that had been reviewed by at least one ASR stood a higher chance of being accepted (525/1006 = 52%) than papers that had been reviewed by non-ASRs only (579/1800 = 32%). An article was 2.4 times more likely to be accepted than rejected by an ASR compared to a non-ASR (Pearson's χ2 (1) = 181.3, p < 0.05). At decision, the editor did not simply follow the reviewers' recommendation but had a balancing role: Rates of recommendation from reviewers for rejection were 11.2% (139/1241) with ASRs versus 29.0% (1379/4755) with non-ASRs (this is a ratio of 0.39 where 1 means no difference between rejection rates for both groups), whereas the proportion of final decisions to reject was 24.7% (248/1006) versus 45.7% (822/1800) (a ratio of 0.54, considerably closer to 1). 
Recommendations by non-ASRs were more favorable for manuscripts from USA/Canada and Europe than for Asia/Pacific or Other countries. ASRs judged North American manuscripts most favorably, and judged papers generally more positively (mean: 2.54 on a 1-5 scale) than did non-ASRs (mean: 3.16) reviewers, whereas time for review (13.28 vs. 13.20 days) did not differ significantly between these groups. We also found that editors preferably assigned reviewers from their own geographical region, but there was no tendency for reviewers to judge papers from their own region more favorably. Our findings strongly confirm a bias toward lower rejection rates when ASRs assess a paper, which led to the decision to abandon the option to recommend reviewers at the Journal of Neurochemistry. Open Data: Materials are available on https://osf.io/jshg7/.
Affiliation(s)
- Laura Hausmann
- Department of Neurology, University Hospital RWTH Aachen, Aachen, Germany
- Barbara Schweitzer
- Department of Neurology, University Hospital RWTH Aachen, Aachen, Germany
- Frank A Middleton
- SUNY Upstate Medical University, Institute for Human Performance, Syracuse, New York, USA
- Jörg B Schulz
- Department of Neurology, University Hospital RWTH Aachen, Aachen, Germany
- Jülich Aachen Research Alliance (JARA) - JARA-Institute Molecular Neuroscience and Neuroimaging, FZ Jülich and RWTH Aachen University, Aachen, Germany
24
Lash TL. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing. Am J Epidemiol 2017; 186:627-635. [PMID: 28938715 DOI: 10.1093/aje/kwx261] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 12/22/2016] [Indexed: 01/09/2023] Open
Abstract
In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best.
Affiliation(s)
- Timothy L Lash, Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA
|