1
Van Eerdenbrugh S, Pingani L, Prevendar T, Lantta T, Zajac J, Prokop-Dorner A, Brandão MP, Poklepović Peričić T, van Hoof J, Lund H, Bała MM. Cross-sectional exploratory survey among health researchers in Europe on the awareness of and barriers affecting the use of an evidence-based research approach. BMJ Open 2024; 14:e083676. PMID: 39414297; PMCID: PMC11487815; DOI: 10.1136/bmjopen-2023-083676.
Abstract
OBJECTIVES This exploratory study was conducted to determine how well the concept of evidence-based research (EBR) is known among European health researchers with substantial clinical research experience, and which barriers affect the use of an EBR approach. The concept of EBR implies that researchers use evidence synthesis to justify new studies and to inform their design.
DESIGN A cross-sectional exploratory survey study.
SETTING AND PARTICIPANTS The survey was conducted among European health researchers. Respondents included 205 health researchers (physicians, nurses, dentists, allied health researchers and members of other professions involved in health research) with a doctoral degree or at least 5 years of research experience.
PRIMARY AND SECONDARY OUTCOME MEASURES The primary outcome measures were the level of awareness of the concept of EBR and the presence of barriers affecting the use of an EBR approach. Secondary outcome measures included correlations between sociodemographic characteristics (eg, profession) and awareness of EBR.
RESULTS Initially, 84.4% of the respondents indicated that they were aware of the concept of EBR. Nevertheless, after reading the definition, 22.5% of them concluded that they either did not know or did not fully comprehend the concept. The main barriers affecting the use of an EBR approach were organisational: not being allocated the resources (30.5% of respondents), the time (24.8%) or the access (14.9%) needed to implement it.
CONCLUSIONS Despite its limitations, this study clearly shows that ongoing initiatives are necessary to raise awareness of the importance of implementing the EBR approach in health research. This paper contributes to a discussion of the issues that obstruct implementation of the EBR approach and of potential solutions to overcome them, such as improving the knowledge and skills necessary to practise the EBR approach.
Affiliation(s)
- Sabine Van Eerdenbrugh
- Department of Speech-Language Pathology, Thomas More University of Applied Sciences, Antwerp, Belgium
- Luca Pingani
- Department of Biomedical, Metabolic and Neural Sciences, Università degli Studi di Modena e Reggio Emilia, Modena, Italy
- Dipartimento ad Attività Integrata Salute Mentale e Dipendenze Patologiche; Direzione delle Professioni Sanitarie, Azienda USL - IRCCS di Reggio Emilia, Reggio Emilia, Italy
- Tamara Prevendar
- Faculty of Psychotherapy Science, Sigmund Freud University Vienna, Vienna, Austria
- Faculty of Psychology, Sigmund Freud University Vienna - Ljubljana Branch, Ljubljana, Slovenia
- Tella Lantta
- Faculty of Medicine, Department of Nursing Science, University of Turku, Turku, Finland
- Faculty of Health, Arts and Design, Centre for Forensic Behavioural Sciences, Swinburne University of Technology, Melbourne, Victoria, Australia
- Joanna Zajac
- Department of Hygiene and Dietetics, Chair of Epidemiology and Preventive Medicine, Jagiellonian University Medical College, Krakow, Poland
- Anna Prokop-Dorner
- Department of Medical Sociology, Chair of Epidemiology and Preventive Medicine, Jagiellonian University Medical College, Krakow, Poland
- Tina Poklepović Peričić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- Joost van Hoof
- Faculty of Social Work & Education, Research Group of Urban Ageing, The Hague University of Applied Sciences, Den Haag, The Netherlands
- Department of Systems Research, Faculty of Spatial Management and Landscape Architecture, Wrocław University of Environmental and Life Sciences, Wroclaw, Poland
- Hans Lund
- Section Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen, Norway
- Małgorzata M Bała
- Department of Hygiene and Dietetics, Chair of Epidemiology and Preventive Medicine, Jagiellonian University Medical College, Krakow, Poland
2
Röver C, Friede T. Using the bayesmeta R package for Bayesian random-effects meta-regression. Computer Methods and Programs in Biomedicine 2023; 229:107303. PMID: 36566650; DOI: 10.1016/j.cmpb.2022.107303.
Abstract
BACKGROUND Random-effects meta-analysis within a hierarchical normal modeling framework is commonly implemented in a wide range of evidence synthesis applications. More general problems may be tackled with meta-regression approaches, which in addition allow for the inclusion of study-level covariables.
METHODS We describe the Bayesian meta-regression implementation provided in the bayesmeta R package, including the choice of priors, and we illustrate its practical use.
RESULTS A wide range of example applications is given, covering binary and continuous covariables, subgroup analysis, indirect comparisons, and model selection. Example R code is provided.
CONCLUSIONS The bayesmeta package provides a flexible implementation. Because MCMC methods are avoided, computations are fast and reproducible, facilitating quick sensitivity checks and large-scale simulation studies.
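The MCMC-free strategy the abstract describes can be illustrated with a minimal sketch (not the bayesmeta implementation itself, and all names are illustrative): in a normal-normal hierarchical model, the pooled effect can be integrated out analytically, so the posterior only requires numerical integration over the one-dimensional heterogeneity parameter. Here a half-normal prior on tau and an improper uniform prior on the pooled effect mu are assumed.

```python
import math

def bayes_meta(y, sigma, tau_prior_scale=0.5, tau_max=2.0, n_grid=400):
    """Bayesian random-effects meta-analysis without MCMC: integrate the
    heterogeneity tau out numerically on a grid (half-normal prior on tau,
    improper uniform prior on the pooled effect mu)."""
    log_post, mu_hats, taus = [], [], []
    for k in range(n_grid):
        tau = tau_max * (k + 0.5) / n_grid          # grid midpoints
        w = [1.0 / (s * s + tau * tau) for s in sigma]
        sw = sum(w)
        mu_hat = sum(wi * yi for wi, yi in zip(w, y)) / sw
        q = sum(wi * (yi - mu_hat) ** 2 for wi, yi in zip(w, y))
        # log marginal likelihood p(data | tau), with mu integrated out
        loglik = 0.5 * (sum(math.log(wi) for wi in w) - math.log(sw) - q)
        log_post.append(loglik - 0.5 * (tau / tau_prior_scale) ** 2)
        mu_hats.append(mu_hat)
        taus.append(tau)
    m = max(log_post)
    p = [math.exp(lp - m) for lp in log_post]
    z = sum(p)
    mu_mean = sum(pi * mh for pi, mh in zip(p, mu_hats)) / z
    tau_mean = sum(pi * t for pi, t in zip(p, taus)) / z
    return mu_mean, tau_mean
```

Because the integration is deterministic, repeated runs give identical results, which is what makes the "fast and reproducible" sensitivity checks mentioned in the conclusions feasible.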
Affiliation(s)
- Christian Röver
- Department of Medical Statistics, University Medical Center Göttingen, Humboldtallee 32, 37073 Göttingen, Germany
- Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Humboldtallee 32, 37073 Göttingen, Germany
3
Andreasen J, Nørgaard B, Draborg E, Juhl CB, Yost J, Brunnhuber K, Robinson KA, Lund H. Justification of research using systematic reviews continues to be inconsistent in clinical health science-A systematic review and meta-analysis of meta-research studies. PLoS One 2022; 17:e0276955. PMID: 36315526; PMCID: PMC9621455; DOI: 10.1371/journal.pone.0276955.
Abstract
BACKGROUND Redundancy is an unethical, unscientific, and costly challenge in clinical health research. The risk of redundancy is high when existing evidence is not used to justify the research question of a newly initiated study. The aim of this study was therefore to synthesize meta-research studies evaluating whether and how authors of clinical health research studies use systematic reviews when initiating a new study.
METHODS Seven electronic bibliographic databases were searched (final search June 2021). Meta-research studies assessing the use of systematic reviews when justifying new clinical health studies were included. Screening and data extraction were performed by two reviewers independently. The primary outcome was defined as the percentage of original studies within the included meta-research studies that used systematic reviews of previous studies to justify a new study. Results were synthesized narratively and quantitatively using a random-effects meta-analysis. The protocol was registered in the Open Science Framework (https://osf.io/nw7ch/).
RESULTS Twenty-one meta-research studies were included, representing 3,621 original studies or protocols. Nineteen of the 21 studies were included in the meta-analysis. The included studies represented different disciplines and exhibited wide variability both in how the use of previous systematic reviews was assessed and in how this was reported. The use of systematic reviews to justify new studies varied from 16% to 87%. The mean percentage of original studies using systematic reviews to justify their study was 42% (95% CI: 36% to 48%).
CONCLUSION The justification of new studies in clinical health research using systematic reviews is highly variable, and fewer than half of new clinical studies in health science were justified using a systematic review. Research redundancy is a challenge for clinical health researchers, as well as for funders, ethics committees, and journals.
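The kind of random-effects pooling of study-level percentages described above can be sketched with a standard DerSimonian-Laird synthesis of proportions on the logit scale. This is a generic illustration of the technique, not the authors' analysis code; the continuity correction and all names are illustrative choices.

```python
import math

def dl_pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of study proportions on the
    logit scale; returns (pooled proportion, 95% CI lower, 95% CI upper)."""
    y, v = [], []
    for x, n in zip(events, totals):
        x, n = x + 0.5, n + 1.0                    # continuity correction
        p = x / n
        y.append(math.log(p / (1 - p)))            # logit-transformed proportion
        v.append(1.0 / x + 1.0 / (n - x))          # approximate variance
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    y_fix = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - y_fix) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # DL heterogeneity estimate
    wr = [1.0 / (vi + tau2) for vi in v]           # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    inv = lambda t: 1.0 / (1.0 + math.exp(-t))     # back-transform to proportion
    return inv(mu), inv(mu - 1.96 * se), inv(mu + 1.96 * se)
```

Feeding in the per-study counts of "original studies citing a systematic review" would yield a pooled percentage with a confidence interval of the kind reported in the results.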
Affiliation(s)
- Jane Andreasen
- Department of Physiotherapy and Occupational Therapy, Aalborg University Hospital, Aalborg, Denmark, and Public Health and Epidemiology Group, Department of Health, Science and Technology, Aalborg University, Aalborg, Denmark
- Birgitte Nørgaard
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Eva Draborg
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Carsten Bogh Juhl
- Department of Sports Science and Clinical Biomechanics, University of Southern Denmark, and Department of Physiotherapy and Occupational Therapy, Copenhagen University Hospital, Herlev and Gentofte, Herlev, Denmark
- Jennifer Yost
- M. Louise Fitzpatrick College of Nursing, Villanova University, Villanova, PA, United States of America
- Karen A. Robinson
- Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
- Hans Lund
- Department of Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen, Norway
4
Draborg E, Andreasen J, Nørgaard B, Juhl CB, Yost J, Brunnhuber K, Robinson KA, Lund H. Systematic reviews are rarely used to contextualise new results-a systematic review and meta-analysis of meta-research studies. Syst Rev 2022; 11:189. PMID: 36064741; PMCID: PMC9446778; DOI: 10.1186/s13643-022-02062-8.
Abstract
BACKGROUND Results of new studies should be interpreted in the context of what is already known, to compare results and build the state of the science. This systematic review and meta-analysis aimed to identify and synthesise results from meta-research studies examining whether original studies within health research use systematic reviews to place their results in the context of earlier, similar studies.
METHODS We searched MEDLINE (OVID), EMBASE (OVID), and the Cochrane Methodology Register for meta-research studies reporting the use of systematic reviews to place results of original clinical studies in the context of existing studies. The primary outcome was the percentage of original studies included in the meta-research studies that used systematic reviews or meta-analyses to place new results in the context of existing studies. Two reviewers independently performed screening and data extraction. Data were synthesised narratively, and a random-effects meta-analysis was performed to estimate the mean proportion of original studies placing their results in the context of earlier studies. The protocol was registered in the Open Science Framework.
RESULTS We included 15 meta-research studies, representing 1724 original studies. The mean percentage of original studies within these meta-research studies placing their results in the context of existing studies was 30.7% (95% CI [23.8%, 37.6%], I2=87.4%). Only one of the meta-research studies integrated results in a meta-analysis, while four integrated their results within a systematic review; the remaining cited or referred to a systematic review. The results of this systematic review are characterised by a high degree of heterogeneity and should be interpreted cautiously.
CONCLUSION Our systematic review demonstrates a low rate of, and great variability in, using systematic reviews to place new results in the context of existing studies. On average, one third of the original studies contextualised their results. Researchers still need to improve their systematic and transparent use of prior research (also known as an evidence-based research approach) to contribute to the accumulation of new evidence on which future studies should be based.
SYSTEMATIC REVIEW REGISTRATION Open Science registration number https://osf.io/8gkzu/.
Affiliation(s)
- Eva Draborg
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Jane Andreasen
- Department of Physiotherapy and Occupational Therapy, Aalborg University Hospital, Aalborg, Denmark, and Public Health and Epidemiology Group, Department of Health, Science and Technology, Aalborg University, Aalborg, Denmark
- Birgitte Nørgaard
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Carsten Bogh Juhl
- Department of Sports Science and Clinical Biomechanics, University of Southern Denmark, and Department of Physiotherapy and Occupational Therapy, Copenhagen University Hospital, Herlev and Gentofte, Denmark
- Jennifer Yost
- M. Louise Fitzpatrick College of Nursing, Villanova University, Villanova, USA
- Hans Lund
- Section of Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen, Norway
5
Nørgaard B, Briel M, Chrysostomou S, Ristic Medic D, Buttigieg SC, Kiisk E, Puljak L, Bala M, Pericic TP, Lesniak W, Zając J, Lund H, Pieper D. A systematic review of meta-research studies finds substantial methodological heterogeneity in citation analyses to monitor evidence-based research. J Clin Epidemiol 2022; 150:126-141. DOI: 10.1016/j.jclinepi.2022.06.021.
6
Röver C, Ursino M, Friede T, Zohar S. A straightforward meta-analysis approach for oncology phase I dose-finding studies. Stat Med 2022; 41:3915-3940. PMID: 35661205; DOI: 10.1002/sim.9484.
Abstract
Phase I (early-phase) clinical studies aim at investigating the safety and the underlying dose-toxicity relationship of a drug or drug combination. While little may yet be known about the compound's properties, it is crucial to consider quantitative information available from any studies that may have been conducted previously on the same drug. A meta-analytic approach has the advantage of properly accounting for between-study heterogeneity, and it may readily be extended to prediction or shrinkage applications. Here we propose a simple and robust two-stage approach for the estimation of maximum tolerated dose(s), utilizing penalized logistic regression and Bayesian random-effects meta-analysis methodology. Implementation is facilitated using standard R packages. The properties of the proposed methods are investigated in Monte Carlo simulations. The investigations are motivated and illustrated by two examples from oncology.
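A sketch of the first stage may help make the two-stage idea concrete: a penalized logistic regression of toxicity on log-dose, from which a target-toxicity dose is read off. The specific penalty used here (ridge shrinkage of the slope toward 1) and all names are illustrative assumptions, not the paper's actual specification; the per-study estimates such a fit yields could then be pooled with a random-effects meta-analysis as the second stage.

```python
import math

def fit_dose_tox(doses, tox, n_at_dose, target=0.25, lam=1.0,
                 lr=0.05, iters=5000):
    """Penalized logistic regression of toxicity on log-dose, fitted by
    gradient descent; returns (intercept, slope, dose with predicted
    toxicity equal to the target rate)."""
    x = [math.log(d) for d in doses]
    a, b = 0.0, 1.0
    total_n = sum(n_at_dose)
    for _ in range(iters):
        ga, gb = lam * a, lam * (b - 1.0)   # penalty shrinks slope toward 1
        for xi, ti, ni in zip(x, tox, n_at_dose):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += ni * p - ti               # gradient of negative log-likelihood
            gb += (ni * p - ti) * xi
        a -= lr * ga / total_n
        b -= lr * gb / total_n
    # invert the fitted curve at the target toxicity probability
    mtd = math.exp((math.log(target / (1 - target)) - a) / b)
    return a, b, mtd
```

With monotone toxicity data the fitted slope is positive and the estimated dose lies within the tested range, which is the behavior one would sanity-check in a simulation study.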
Affiliation(s)
- Christian Röver
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Moreno Ursino
- Unit of Clinical Epidemiology, AP-HP, CHU Robert Debré, Université Paris Cité, Inserm CIC-EC 1426, Paris, France
- Inserm, Centre de Recherche des Cordeliers, Université Paris Cité, Sorbonne Université, Paris, France
- HeKA, Inria Paris, Paris, France
- Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Sarah Zohar
- Inserm, Centre de Recherche des Cordeliers, Université Paris Cité, Sorbonne Université, Paris, France
- HeKA, Inria Paris, Paris, France
7
ter Schure J, Grünwald P. ALL-IN meta-analysis: breathing life into living systematic reviews. F1000Res 2022; 11:549. PMID: 36313543; PMCID: PMC9587381; DOI: 10.12688/f1000research.74223.1.
Abstract
Science is justly admired as a cumulative process ("standing on the shoulders of giants"), yet scientific knowledge is typically built on a patchwork of research contributions without much coordination. This lack of efficiency has specifically been addressed in clinical research by recommendations for living systematic reviews and against research waste. We propose to further those recommendations with ALL-IN meta-analysis: Anytime Live and Leading INterim meta-analysis. ALL-IN provides statistical methodology for a meta-analysis that can be updated at any time (reanalyzing after each new observation while retaining type-I error guarantees), that is live (no need to prespecify the looks), and that is leading (in decisions on whether individual studies should be initiated, stopped, or expanded, the meta-analysis can be the leading source of information). We illustrate the method for time-to-event data, showing how synthesizing data at interim stages of studies can increase efficiency when individual studies are slow to accrue the number of events needed for completion. The meta-analysis can be performed on interim data, but does not have to be. The analysis design requires no information about the number of patients in trials or the number of trials eventually included, so it can breathe life into living systematic reviews through better and simpler statistics, efficiency, collaboration, and communication.
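The anytime-valid property can be illustrated with a toy e-value stream for binary outcomes: a running likelihood ratio against a fixed alternative forms a test martingale under the null, so the evidence can be monitored continuously. This is a simplified sketch in the spirit of the method, not the authors' actual methodology for time-to-event data; the fixed alternative and all names are illustrative.

```python
def safe_evalue_stream(outcomes, p0=0.5, p1=0.6):
    """Accumulate a likelihood-ratio e-value over a stream of Bernoulli
    outcomes (possibly pooled across trials). The running product is a
    test martingale under H0: p = p0, so stopping as soon as it exceeds
    1/alpha keeps the type-I error below alpha regardless of when, or
    how often, the accumulating evidence is inspected."""
    e = 1.0
    path = []
    for y in outcomes:
        e *= (p1 if y else 1.0 - p1) / (p0 if y else 1.0 - p0)
        path.append(e)
    return path
```

At level alpha = 0.05, one would stop and reject H0 as soon as the running e-value passes 1/0.05 = 20; no prespecified looks are needed, which is the "live" aspect described above.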
Affiliation(s)
- Peter Grünwald
- Machine Learning, CWI, Amsterdam, The Netherlands
- Mathematics, Leiden University, Leiden, The Netherlands
8
Sheng J, Feldhake E, Zarin DA, Kimmelman J. Completeness of clinical evidence citation in trial protocols: A cross-sectional analysis. Med 2022; 3:335-343.e6. PMID: 35584654; DOI: 10.1016/j.medj.2022.03.002.
Abstract
BACKGROUND Human protection policies require assessment of how proposed clinical trials relate to prior and ongoing studies testing similar hypotheses. We assessed the extent to which clinical trial protocols cited relevant published and ongoing clinical trials that would have been easily accessible with reference searches.
METHODS We created a random sample of trial protocols using ClinicalTrials.gov, stratifying by industry- and non-industry-sponsored studies. We then conducted reference searches to determine the extent to which protocols cited clinical trials with identical intervention-indication pairings that were accessible in PubMed and ClinicalTrials.gov at the time of trial initiation.
FINDINGS Of the 101 trial protocols evaluated, 73 had at least one identified citable trial. None contained statements suggesting a systematic search for relevant clinical evidence. Of industry-sponsored trial protocols with at least one identified citable trial, 7 of 23 (30.4%) did not cite any published clinical trials and 10 of 33 (30.3%) did not cite any ongoing relevant trials. Of the non-industry-sponsored trial protocols with at least one identified citable trial, 5 of 28 (17.9%) did not cite any published clinical trials and 14 of 19 (73.7%) did not cite any ongoing trials.
CONCLUSIONS Clinical trial protocols undercite accessible, relevant trials and do not document systematic searches for relevant clinical trials. Consequently, ethics review committees often receive an incomplete picture of the research landscape if they review protocols similar to those deposited on ClinicalTrials.gov.
FUNDING This study was funded by the Canadian Institutes of Health Research and the Greenwall Foundation.
Affiliation(s)
- Jacky Sheng
- Department of Equity, Ethics and Policy, McGill University, 2001 McGill College, 11th Floor, Montreal, QC H3A 1G1, Canada
- Emma Feldhake
- Department of Equity, Ethics and Policy, McGill University, 2001 McGill College, 11th Floor, Montreal, QC H3A 1G1, Canada
- Deborah A Zarin
- Multi-Regional Clinical Trials Center, Brigham and Women's Hospital and Harvard, Cambridge, MA 02115, USA
- Jonathan Kimmelman
- Department of Equity, Ethics and Policy, McGill University, 2001 McGill College, 11th Floor, Montreal, QC H3A 1G1, Canada
9
Aschmann HE, McNeil JJ, Puhan MA. Rejoinder to Dr Vickers. Clin Trials 2022; 19:229-230. PMID: 35152804; DOI: 10.1177/17407745211068548.
Affiliation(s)
- Hélène E Aschmann
- Epidemiology, Biostatistics and Prevention Institute, Department of Epidemiology, University of Zurich, Zurich, Switzerland
- Department of Epidemiology and Biostatistics, University of California, San Francisco, San Francisco, USA
- John J McNeil
- School of Public Health and Preventive Medicine, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- Milo A Puhan
- Epidemiology, Biostatistics and Prevention Institute, Department of Epidemiology, University of Zurich, Zurich, Switzerland
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
10
Siontis GC, Nikolakopoulou A, Sweda R, Mavridis D, Salanti G. Estimating the sample size of sham-controlled randomized controlled trials using existing evidence. F1000Res 2022; 11:85. PMID: 36451658; PMCID: PMC9669514; DOI: 10.12688/f1000research.108554.2.
Abstract
Background: In randomized controlled trials (RCTs), the power is often 'reverse engineered' based on the number of participants that can realistically be achieved. An attractive alternative is planning a new trial conditional on the available evidence, a design of particular interest in RCTs that use a sham control arm (sham-RCTs).
Methods: We explore the design of sham-RCTs and the role of sequential meta-analysis and conditional planning in a systematic review of renal sympathetic denervation for patients with arterial hypertension. The main efficacy endpoint was mean change in 24-hour systolic blood pressure. We performed sequential meta-analysis to identify the time point at which the null hypothesis would have been rejected in a prospective scenario. Evidence-based conditional sample size calculations were performed based on fixed-effect meta-analysis.
Results: In total, six sham-RCTs (981 participants) were identified. The first RCT was considerably larger (535 participants) than those subsequently published (median sample size of 80). All trial sample sizes were calculated assuming an unrealistically large intervention effect, which resulted in low power when each study is considered as a stand-alone experiment. Sequential meta-analysis provided firm evidence against the null hypothesis after synthesis of the first four trials (755 patients; cumulative mean difference -2.75, 95% CI -4.93 to -0.58, favoring the active intervention). Conditional planning resulted in much larger sample sizes than those of the original trials, owing to overoptimistic intervention effects assumed by the investigators of the individual trials, and potentially to a time-effect association.
Conclusions: Sequential meta-analysis of sham-RCTs can reach conclusive findings earlier and hence avoid exposing patients to sham-related risks. Conditional planning of new sham-RCTs poses important challenges: as many surgical/minimally invasive procedures improve over time, the intervention effect is expected to increase in new studies, which violates the underlying assumptions. Unless this is accounted for, conditional planning will not improve the design of sham-RCTs.
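The "evidence-based conditional sample size" idea (powering a new trial on the cumulative meta-analytic effect rather than on an optimistic assumed effect) can be sketched with the conventional two-arm formula for a continuous endpoint. The standard deviation and the 90% power below are illustrative assumptions, not values reported in the abstract.

```python
import math

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection on the erf-based CDF."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_arm(delta, sd, alpha=0.05, power=0.90):
    """Participants per arm to detect mean difference `delta` with the
    given two-sided alpha and power, assuming common SD `sd`."""
    za = norm_ppf(1.0 - alpha / 2.0)
    zb = norm_ppf(power)
    return math.ceil(2.0 * (sd * (za + zb) / delta) ** 2)
```

Plugging in the cumulative mean difference of -2.75 mmHg with an assumed SD of around 15 mmHg yields several hundred patients per arm, illustrating why conditional planning produced much larger sample sizes than the original trials, which assumed larger effects.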
Affiliation(s)
- Romy Sweda
- Department of Cardiology, University Hospital of Bern, Bern, Switzerland
- Dimitris Mavridis
- Department of Primary Education, University of Ioannina, Ioannina, Greece
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
11
Siontis GC, Nikolakopoulou A, Sweda R, Mavridis D, Salanti G. Estimating the sample size of sham-controlled randomized controlled trials using existing evidence. F1000Res 2022; 11:85. PMID: 36451658; PMCID: PMC9669514; DOI: 10.12688/f1000research.108554.1. [Earlier version of the record listed under entry 10; abstract and affiliations identical.]
12
Nørgaard B, Draborg E, Andreasen J, Juhl CB, Yost J, Brunnhuber K, Robinson KA, Lund H. Systematic Reviews are Rarely Used to Inform Study Design - a Systematic Review and Meta-analysis. J Clin Epidemiol 2022; 145:1-13. PMID: 35045317; DOI: 10.1016/j.jclinepi.2022.01.007.
Abstract
OBJECTIVE Our aim was to identify and synthesize the results of meta-research studies to determine whether and how authors of original studies in clinical health research use systematic reviews when designing new studies.
STUDY DESIGN AND SETTING For this systematic review, we searched MEDLINE (OVID), Embase (OVID) and the Cochrane Methodology Register. We included meta-research studies; the primary outcome was the percentage of original studies using systematic reviews to design their study. Risk of bias was assessed using an ad hoc list of ten items. The results are presented both as a narrative synthesis and as a meta-analysis.
RESULTS Sixteen studies were included. The use of a systematic review to inform the design of new clinical studies varied between 0% and 73%, with a mean percentage of 17%. The number of design components informed by previous systematic reviews varied from three to eleven.
CONCLUSION Clinical health research is characterized by variability in the extent to which systematic reviews are used to guide study design. An evidence-based research (EBR) approach to the design of new clinical health studies is necessary to decrease potential research redundancy and increase end-user value.
Affiliation(s)
- Birgitte Nørgaard
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Eva Draborg
- Department of Public Health, University of Southern Denmark, Odense, Denmark
- Jane Andreasen
- Department of Physiotherapy and Occupational Therapy, Aalborg University Hospital, Aalborg, Denmark, and Public Health and Epidemiology Group, Department of Health, Science and Technology, Aalborg University, Aalborg, Denmark
- Carsten Bogh Juhl
- Department of Sports Science and Clinical Biomechanics, University of Southern Denmark, and Department of Physiotherapy and Occupational Therapy, Copenhagen University Hospital, Herlev and Gentofte, Denmark
- Jennifer Yost
- M. Louise Fitzpatrick College of Nursing, Villanova University, Philadelphia, Pennsylvania, USA
- Hans Lund
- Department of Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen, Norway
13
Gosling CJ, Cartigny A, Mellier BC, Solanes A, Radua J, Delorme R. Efficacy of psychosocial interventions for Autism spectrum disorder: an umbrella review. Mol Psychiatry 2022; 27:3647-3656. PMID: 35790873; PMCID: PMC9708596; DOI: 10.1038/s41380-022-01670-z.
Abstract
INTRODUCTION The wide range of psychosocial interventions designed to assist people with Autism Spectrum Disorder (ASD) makes it challenging to compile and hierarchize the scientific evidence that supports the efficacy of these interventions. Thus, we performed an umbrella review of published meta-analyses of controlled clinical trials that investigated the efficacy of psychosocial interventions on both core and related ASD symptoms. METHODS Each meta-analysis that was identified was re-estimated using a random-effects model with a restricted maximum likelihood estimator. The methodological quality of included meta-analyses was critically appraised and the credibility of the evidence was assessed algorithmically according to criteria adapted for the purpose of this study. RESULTS We identified a total of 128 meta-analyses derived from 44 reports. More than half of the non-overlapping meta-analyses were nominally statistically significant and/or displayed a moderate-to-large pooled effect size that favored the psychosocial interventions. The assessment of the credibility of evidence indicated that the efficacy of early intensive behavioral interventions, developmental interventions, naturalistic developmental behavioral interventions, and parent-mediated interventions was supported by suggestive evidence on at least one outcome in preschool children. Possible outcomes included social communication deficits, global cognitive abilities, and adaptive behaviors. Results also revealed highly suggestive evidence that parent-mediated interventions improved disruptive behaviors in early school-aged children. The efficacy of social skills groups was supported by suggestive evidence for improving social communication deficits and overall ASD symptoms in school-aged children and adolescents. Only four meta-analyses had a statistically significant pooled effect size in a sensitivity analysis restricted to randomized controlled trials at low risk of detection bias.
DISCUSSION This umbrella review confirmed that several psychosocial interventions show promise for improving symptoms related to ASD at different stages of life. However, additional well-designed randomized controlled trials are still required to produce a clearer picture of the efficacy of these interventions. To facilitate the dissemination of scientific knowledge about psychosocial interventions for individuals with ASD, we built an open-access and interactive website that shares the information collected and the results generated during this umbrella review. PRE-REGISTRATION PROSPERO ID CRD42020212630.
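Re-estimating a published meta-analysis under a random-effects model, as described above, can be sketched in a few lines. The abstract specifies a restricted maximum likelihood estimator; the closed-form DerSimonian-Laird estimator below is a simpler stand-in for illustration, and the effect sizes and variances are hypothetical, not taken from the review:

```python
import math

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.
    (The review above used REML; DL is a simpler closed-form stand-in.)"""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return mu, se, tau2

# Hypothetical standardized mean differences and variances from five trials
effects = [0.80, 0.05, 0.60, 0.10, 0.45]
variances = [0.04, 0.02, 0.09, 0.05, 0.03]
mu, se, tau2 = random_effects_pool(effects, variances)
print(f"pooled SMD = {mu:.2f}, 95% CI {mu - 1.96*se:.2f} to {mu + 1.96*se:.2f}, tau^2 = {tau2:.3f}")
```

With heterogeneous inputs like these, the random-effects weights are more even than the fixed-effect weights, so small precise studies dominate less.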
Affiliation(s)
- Corentin J. Gosling: Paris Nanterre University, DysCo Laboratory, F-92000 Nanterre, France; Université de Paris, Laboratoire de Psychopathologie et Processus de Santé, F-92100 Boulogne-Billancourt, France; Centre for Innovation in Mental Health (CIMH), School of Psychology, Faculty of Environmental and Life Sciences, University of Southampton, Southampton, UK
- Ariane Cartigny: Université de Paris, Laboratoire de Psychopathologie et Processus de Santé, F-92100 Boulogne-Billancourt, France; Department of Child and Adolescent Psychiatry, Robert Debré Hospital, APHP, Paris, France
- Aleix Solanes: Imaging of Mood- and Anxiety-Related Disorders (IMARD) Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), CIBERSAM, Barcelona, Spain
- Joaquim Radua: Imaging of Mood- and Anxiety-Related Disorders (IMARD) Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), CIBERSAM, Barcelona, Spain; Department of Psychosis Studies, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK; Department of Clinical Neuroscience, Centre for Psychiatric Research and Education, Karolinska Institutet, Stockholm, Sweden
- Richard Delorme: Department of Child and Adolescent Psychiatry, Robert Debré Hospital, APHP, Paris, France; Human Genetics and Cognitive Functions, Institut Pasteur, Paris, France
14
Clayton GL, Elliott D, Higgins JPT, Jones HE. Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists' views. Trials 2021; 22:789. [PMID: 34749778 PMCID: PMC8577005 DOI: 10.1186/s13063-021-05759-8]
Abstract
BACKGROUND Evidence from previous studies is often used relatively informally in the design of clinical trials: for example, a systematic review to indicate whether a gap in the current evidence base justifies a new trial. External evidence can be used more formally in both trial design and analysis, by explicitly incorporating a synthesis of it in a Bayesian framework. However, it is unclear how common this is in practice or the extent to which it is considered controversial. In this qualitative study, we explored attitudes towards, and experiences of, trialists in incorporating synthesised external evidence through the Bayesian design or analysis of a trial. METHODS Semi-structured interviews were conducted with 16 trialists: 13 statisticians and three clinicians. Participants were recruited across several universities and trials units in the United Kingdom using snowball and purposeful sampling. Data were analysed using thematic analysis and techniques of constant comparison. RESULTS Trialists used existing evidence in many ways in trial design, for example, to justify a gap in the evidence base and inform parameters in sample size calculations. However, no one in our sample reported using such evidence in a Bayesian framework. Participants tended to equate Bayesian analysis with the incorporation of prior information on the intervention effect and were less aware of the potential to incorporate data on other parameters. When introduced to the concepts, many trialists felt they could be making more use of existing data to inform the design and analysis of a trial in particular scenarios. For example, some felt existing data could be used more formally to inform background adverse event rates, rather than relying on clinical opinion as to whether there are potential safety concerns. 
However, several barriers to implementing these methods in practice were identified, including concerns about the relevance of external data, acceptability of Bayesian methods, lack of confidence in Bayesian methods and software, and practical issues, such as difficulties accessing relevant data. CONCLUSIONS Despite trialists recognising that more formal use of external evidence could be advantageous over current approaches in some areas and useful as sensitivity analyses, there are still barriers to such use in practice.
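The formal incorporation of synthesised external evidence that participants were less familiar with can be illustrated with the simplest case, a conjugate normal-normal update of a meta-analytic prior by a new trial's estimate. The prior and trial numbers below are hypothetical:

```python
import math

def normal_posterior(prior_mean, prior_sd, trial_est, trial_se):
    """Conjugate normal-normal update: combine a meta-analytic prior on a
    treatment effect with a new trial's estimate (both on the same scale,
    e.g. a log hazard ratio). Precisions (inverse variances) add."""
    w0 = 1.0 / prior_sd ** 2   # prior precision
    w1 = 1.0 / trial_se ** 2   # trial precision
    post_var = 1.0 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * trial_est)
    return post_mean, math.sqrt(post_var)

# Hypothetical numbers: meta-analytic prior log-HR -0.20 (SD 0.15),
# new trial log-HR -0.35 (SE 0.20)
m, s = normal_posterior(-0.20, 0.15, -0.35, 0.20)
print(f"posterior log-HR = {m:.3f} (SD {s:.3f})")
```

The posterior mean lies between the prior and the trial estimate, weighted by precision, and the posterior SD is smaller than either alone; the same machinery applies to nuisance parameters such as background adverse event rates, as the participants noted.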
Affiliation(s)
- Gemma L Clayton: Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Daisy Elliott: Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; Bristol Centre for Surgical Research, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Julian P T Higgins: Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; NIHR Applied Research Collaboration West (ARC West) at University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, UK
- Hayley E Jones: Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
15
McLennan S, Nussbaumer-Streit B, Hemkens LG, Briel M. Barriers and Facilitating Factors for Conducting Systematic Evidence Assessments in Academic Clinical Trials. JAMA Netw Open 2021; 4:e2136577. [PMID: 34846522 PMCID: PMC8634056 DOI: 10.1001/jamanetworkopen.2021.36577]
Abstract
IMPORTANCE A systematic assessment of existing research should justify the conduct and inform the design of new clinical research but is often lacking. There is little research on the barriers to and factors facilitating systematic evidence assessments. OBJECTIVE To examine the practices and attitudes of Swiss stakeholders and international funders regarding conducting systematic evidence assessments in academic clinical trials. DESIGN, SETTING, AND PARTICIPANTS In this qualitative study, individual semistructured qualitative interviews were conducted between February and August 2020 with 48 Swiss stakeholders from four groups (27 primary investigators, 9 funders and sponsors, 6 clinical trial support organizations, and 6 ethics committee members) and between January and March 2021 with 9 international funders of clinical trials from North America and Europe with a reputation for requiring systematic evidence synthesis in applications for academic clinical trials. MAIN OUTCOMES AND MEASURES The main outcomes were practices and attitudes of Swiss stakeholders and international funders regarding conducting systematic evidence assessments in academic clinical trials. Interviews were analyzed using conventional content analysis. RESULTS Of the 57 participants, 40 (70.2%) were male. Participants universally acknowledged that a comprehensive understanding of the previous evidence is important but reported wide variation regarding how this should be achieved. Participants reported that the conduct of formal systematic reviews was currently not expected before most clinical trials, but most international funders reported expecting a systematic search for the existing evidence. Whereas time and resources were reported by all participants as barriers to conducting systematic reviews, the Swiss research ecosystem was reported to be less supportive of a systematic approach than international settings.
CONCLUSIONS AND RELEVANCE In this qualitative study, Swiss stakeholders and international funders generally agreed that new clinical trials should be justified by a systematic evidence assessment but that barriers on individual, organizational, and political levels kept them from implementing it. More explicit requirements from funders appear to be needed to clarify the required level of comprehensiveness in summarizing existing evidence for different types of clinical trials.
Affiliation(s)
- Stuart McLennan: Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland; Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
- Barbara Nussbaumer-Streit: Cochrane Austria, Department for Evidence-based Medicine and Evaluation, Danube University Krems, Krems, Austria
- Lars G. Hemkens: Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland; Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California; Meta-Research Innovation Center Berlin, Berlin Institute of Health, Berlin, Germany
- Matthias Briel: Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
16
Vellinga A, Lambe K, O'Connor P, O'Dea A. What discontinued trials teach us about trial registration? BMC Res Notes 2021; 14:47. [PMID: 33546746 PMCID: PMC7863519 DOI: 10.1186/s13104-020-05391-w]
Abstract
OBJECTIVE Trial registries were set up to improve transparency, remove duplication, improve awareness and avoid waste. Many trials never reach the point of patient enrolment for a myriad of reasons. The aim of this study was to investigate the reasons for, and characteristics of, trial discontinuation. RESULTS A total of 163 discontinued trials were identified and compared to completed trials. A survey was designed to further explore the nature and conduct of the trials. No differences in registered and categorised information were observed between discontinued and completed trials. Most trials were discontinued because of patient or participant recruitment issues, often related to funding. Substantial changes to procedures or the protocol, or changes to the recruitment strategy, were also commonly cited reasons. Survey information was available for 21 discontinued and 28 completed trials, and no obvious differences could be identified. Our findings highlight the underlying problems of lack of detail, suboptimal recording, dated information and incomplete reporting of trials within a trial registry, which hamper sharing and learning. To date, important progress has been made through the implementation of standards and the requirement that trials be registered. Our review identifies areas where further improvements can be made.
Affiliation(s)
- Akke Vellinga: School of Medicine, National University of Ireland, Galway, Ireland; Primary Care Clinical Trials Network Ireland, National University of Ireland, Galway, Ireland; Irish Centre for Applied Patient Safety and Simulation (ICAPSS), School of Medicine, National University of Ireland, Galway, Ireland
- Kathryn Lambe: Irish Centre for Applied Patient Safety and Simulation (ICAPSS), School of Medicine, National University of Ireland, Galway, Ireland
- Paul O'Connor: Irish Centre for Applied Patient Safety and Simulation (ICAPSS), School of Medicine, National University of Ireland, Galway, Ireland
- Angela O'Dea: Royal College of Surgeons in Ireland, 121/122 St. Stephen's Green, Dublin 2, Ireland
17
Sankaran SP, Sonis S. Network meta-analysis from a pairwise meta-analysis design: to assess the comparative effectiveness of oral care interventions in preventing ventilator-associated pneumonia in critically ill patients. Clin Oral Investig 2021; 25:2439-2447. [PMID: 33537946 DOI: 10.1007/s00784-021-03802-1]
Abstract
OBJECTIVE In this study, we assessed the usefulness of network meta-analysis (NMA) in creating a hierarchy to define the most effective oral care intervention for the prevention and management of ventilator-associated pneumonia (VAP). MATERIALS AND METHODS We applied NMA to a previously published robust pairwise meta-analysis. Statistical analyses were based on comparing rates of total VAP events between intervention groups and placebo/usual-care groups. We synthesized a netgraph, reported the ranking order of the interventions, and summarized the output in a forest plot with placebo/usual care as the reference treatment. RESULTS Because this NMA pooled studies at both low and high risk of bias, its findings should not be used to guide clinical treatment; rather, they should inform the design of future clinical trials. Applying our inclusion and exclusion criteria for the NMA, we extracted 25 studies (4473 subjects). The NMA included 16 treatments, 29 pairwise comparisons, and 15 designs. Based on frequentist-ranking P scores, tooth brushing (P fixed 0.94, P random 0.89), tooth brushing with povidone-iodine (P fixed 0.90, P random 0.88), and furacillin (P fixed 0.88, P random 0.84) were the three best interventions for preventing VAP. CONCLUSIONS Any conclusion drawn from this NMA should be interpreted with caution, and we recommend future clinical trials to confirm the results. CLINICAL RELEVANCE NMA appeared to be an effective platform on which multiple interventions reported in disparate clinical trials could be compared to derive a hierarchical assessment of efficacy in VAP intervention.
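A rough sketch of how frequentist P scores rank treatments in an NMA: each treatment's score is its mean one-sided certainty of beating every competitor. This simplified version treats the pairwise contrasts as independent (a full NMA uses the joint covariance of the network estimates), and the treatment effects and standard errors below are hypothetical, not taken from the study:

```python
import math

def p_scores(est, se):
    """Frequentist P-scores: for each treatment, the mean probability (under a
    normal approximation) that it is better than each competitor. `est` maps
    treatment -> effect versus a common reference (higher = better); contrast
    SEs are approximated as independent, a simplification of the full NMA."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    scores = {}
    for i in est:
        others = [j for j in est if j != i]
        scores[i] = sum(
            phi((est[i] - est[j]) / math.hypot(se[i], se[j])) for j in others
        ) / len(others)
    return scores

# Hypothetical log-odds reductions in VAP versus usual care (higher = better)
est = {"tooth brushing": 0.60, "chlorhexidine": 0.35, "povidone-iodine": 0.50}
se = {"tooth brushing": 0.20, "chlorhexidine": 0.15, "povidone-iodine": 0.25}
ranking = sorted(p_scores(est, se).items(), key=lambda kv: -kv[1])
print(ranking)
```

A P score near 1 means a treatment almost certainly beats all comparators; near 0.5 means it is indistinguishable from the field.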
Affiliation(s)
- Satheeshkumar P Sankaran: Harvard Medical School, Boston, MA 02115, USA; Department of Oral Oncology, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14263, USA
- Stephen Sonis: Brigham and Women's Hospital and the Harvard School of Dental Medicine, Boston, MA 02115, USA
18
Kim D, Hasford J. Redundant trials can be prevented, if the EU clinical trial regulation is applied duly. BMC Med Ethics 2020; 21:107. [PMID: 33115456 PMCID: PMC7592564 DOI: 10.1186/s12910-020-00536-9]
Abstract
The problem of wasteful clinical trials has been debated relentlessly in the medical community. To a significant extent, it is attributed to redundant trials - studies that are carried out to address questions, which can be answered satisfactorily on the basis of existing knowledge and accessible evidence from prior research. This article presents the first evaluation of the potential of the EU Clinical Trials Regulation 536/2014, which entered into force in 2014 but is expected to become applicable at the end of 2021, to prevent such trials. Having reviewed provisions related to the trial authorisation, we propose how certain regulatory requirements for the assessment of trial applications can and should be interpreted and applied by national research ethics committees and other relevant authorities in order to avoid redundant trials and, most importantly, preclude the unnecessary recruitment of trial participants and their unjustified exposure to health risks.
Affiliation(s)
- Daria Kim: Research Fellow, Max Planck Institute for Innovation and Competition, Marstallplatz 1, 81545 Munich, Germany
- Joerg Hasford: Institute for Medical Information Processing, Biometry, and Epidemiology, Ludwig-Maximilians-University of Munich, and Chairman of the Permanent Working Party of Research Ethics Committees in Germany, Scharnitzerstraße 7, 82166 Gräfelfing, Germany
19
Lund H, Juhl CB, Nørgaard B, Draborg E, Henriksen M, Andreasen J, Christensen R, Nasser M, Ciliska D, Clarke M, Tugwell P, Martin J, Blaine C, Brunnhuber K, Robinson KA. Evidence-Based Research Series-Paper 2: Using an Evidence-Based Research approach before a new study is conducted to ensure value. J Clin Epidemiol 2020; 129:158-166. [PMID: 32987159 DOI: 10.1016/j.jclinepi.2020.07.019]
Abstract
BACKGROUND AND OBJECTIVES There is considerable actual and potential waste in research. The aim of this article is to describe how using an evidence-based research approach before conducting a study helps to ensure that the new study truly adds value. STUDY DESIGN AND SETTING Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner. In this second article of the evidence-based research series, we describe how to apply an evidence-based research approach before starting a new study. RESULTS Before a new study is performed, researchers need to provide a solid justification for it using the available scientific knowledge as well as the perspectives of end users. The key method for both is to conduct a systematic review of earlier relevant studies. CONCLUSION Describing the ideal process illuminates the challenges and opportunities offered through the suggested evidence-based research approach. A systematic and transparent approach is needed to provide justification for and to optimally design a relevant and necessary new study.
Affiliation(s)
- Hans Lund: Section for Evidence-based Practice, Western Norway University of Applied Sciences, Bergen, Norway
- Carsten B Juhl: Department of Sport Science and Clinical Biomechanics, University of Southern Denmark, Odense, Denmark; Department of Physiotherapy and Occupational Therapy, University Hospital of Copenhagen, Herlev & Gentofte, Denmark
- Birgitte Nørgaard: Department of Public Health, University of Southern Denmark, Odense, Denmark
- Eva Draborg: Department of Public Health, University of Southern Denmark, Odense, Denmark
- Marius Henriksen: The Parker Institute, Bispebjerg and Frederiksberg Hospital, University of Copenhagen, Copenhagen, Denmark
- Jane Andreasen: Department of Health, Science and Technology, Public Health and Epidemiology Group, Aalborg University, Aalborg, Denmark; Department of Physical and Occupational Therapy, Aalborg University Hospital, Aalborg, Denmark
- Robin Christensen: Musculoskeletal Statistics Unit, The Parker Institute, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; Department of Clinical Research, Research Unit of Rheumatology, University of Southern Denmark, Odense University Hospital, Denmark
- Mona Nasser: Peninsula Dental School, Plymouth University, Plymouth, England, UK
- Donna Ciliska: Section for Evidence-based Practice, Western Norway University of Applied Sciences, Bergen, Norway; School of Nursing, McMaster University, Hamilton, Canada
- Mike Clarke: Northern Ireland Methodology Hub, Queen's University Belfast, Northern Ireland
- Peter Tugwell: Department of Medicine, University of Ottawa, Ottawa, Canada
- Janet Martin: MEDICI Centre, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada; Departments of Anesthesia & Biostatistics and Epidemiology & Biostatistics, Western University, London, Canada
- Klara Brunnhuber: Digital Content Services, Data Platform Operations, Elsevier, London, UK
20
Robinson KA, Brunnhuber K, Ciliska D, Juhl CB, Christensen R, Lund H. Evidence-Based Research Series-Paper 1: What Evidence-Based Research is and why is it important? J Clin Epidemiol 2020; 129:151-157. [PMID: 32979491 DOI: 10.1016/j.jclinepi.2020.07.020]
Abstract
OBJECTIVES There is considerable actual and potential waste in research. Evidence-based research ensures worthwhile and valuable research. The aim of this series, which this article introduces, is to describe the evidence-based research approach. STUDY DESIGN AND SETTING In this first article of a three-article series, we introduce the evidence-based research approach. Evidence-based research is the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner. RESULTS We describe evidence-based research and provide an overview of the approach of systematically and transparently using previous research before starting a new study to justify and design the new study (article #2 in series) and-on study completion-place its results in the context with what is already known (article #3 in series). CONCLUSION This series introduces evidence-based research as an approach to minimize unnecessary and irrelevant clinical health research that is unscientific, wasteful, and unethical.
Affiliation(s)
- Karen A Robinson: Johns Hopkins Evidence-based Practice Center, Division of General Internal Medicine, Department of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Klara Brunnhuber: Digital Content Services, Operations, Elsevier Ltd., 125 London Wall, London, EC2Y 5AS, UK
- Donna Ciliska: School of Nursing, McMaster University, Health Sciences Centre, Room 2J20, 1280 Main Street West, Hamilton, Ontario, Canada, L8S 4K1; Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, P.O. Box 7030, N-5020 Bergen, Norway
- Carsten Bogh Juhl: Department of Sport Science and Clinical Biomechanics, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark; Department of Physiotherapy and Occupational Therapy, University Hospital of Copenhagen, Herlev & Gentofte, Kildegaardsvej 28, 2900 Hellerup, Denmark
- Robin Christensen: Musculoskeletal Statistics Unit, The Parker Institute, Bispebjerg and Frederiksberg Hospital, Nordre Fasanvej 57, 2000 Copenhagen F, Denmark; Department of Clinical Research, Research Unit of Rheumatology, University of Southern Denmark, Odense University Hospital, Denmark
- Hans Lund: Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Inndalsveien 28, P.O. Box 7030, N-5020 Bergen, Norway
21
The fragility of statistically significant results from clinical nutrition randomized controlled trials. Clin Nutr 2019; 39:1284-1291. [PMID: 31221372 DOI: 10.1016/j.clnu.2019.05.024]
Abstract
BACKGROUND & AIMS Recently, a parameter called the "fragility index" (FI) has been proposed, which measures how many events the statistical significance of a result relies on. The lower the FI, the more "fragile" the result, and the more care should be taken when interpreting it. Our aim in this study was to assess the FI of nutritional trials. METHODS We conducted a systematic review of human clinical nutrition RCTs that reported statistically significant dichotomous primary outcomes. We searched the EMBASE, MEDLINE, and Scopus databases. We calculated the FI of primary outcomes using the Fisher exact test and examined correlations between FI and the number of randomised patients, the p-value of the primary outcome, the publication date, the journal impact factor, and the number of patients lost to follow-up. RESULTS The initial database search revealed 5790 articles, 37 of which were included in the qualitative synthesis. The median (IQR) FI for all studies was 1 (1-3). Twenty-eight studies (75.7%) had an FI of 2 or lower, and in 12 articles (32.4%), the FI was lower than the number of patients lost to follow-up. No correlations were found between FI and the study characteristics (number of randomized patients, p-value of the primary outcome, event ratio in the experimental group, event ratio in the control group, publication date, journal impact factor, loss to follow-up). CONCLUSION The results of RCTs in nutritional research often rely on a small number of events or patients. The number of patients lost to follow-up is frequently higher than the FI. Recommendations based on such RCTs should be formulated with caution, and the FI may be used as an auxiliary parameter when assessing the robustness of their findings.
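A fragility index of 1, the median reported above, means that flipping a single patient's outcome removes statistical significance. A minimal sketch of the calculation, using a self-contained Fisher exact test and a hypothetical 1/100 vs 9/100 trial:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are no
    more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(k):  # hypergeometric probability of k events in arm 1
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(c1, r1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Fragility index of a significant dichotomous result: the number of
    non-event-to-event flips in the arm with fewer events needed to push
    the Fisher exact p-value to alpha or above."""
    if e1 / n1 > e2 / n2:            # make arm 1 the lower-event arm
        e1, n1, e2, n2 = e2, n2, e1, n1
    fi = 0
    while fisher_two_sided(e1, n1 - e1, e2, n2 - e2) < alpha and e1 < n1:
        e1 += 1                      # flip one patient's outcome
        fi += 1
    return fi

# Hypothetical trial: 1/100 events vs 9/100 events
print("p =", round(fisher_two_sided(1, 99, 9, 91), 4))
print("FI =", fragility_index(1, 100, 9, 100))
```

Here a single flipped outcome (2/100 vs 9/100) already lifts the p-value above 0.05, so the nominally significant result has FI = 1.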
22
Peinemann F, Labeit A. Negative pressure wound therapy: A systematic review of randomized controlled trials from 2000 to 2017. J Evid Based Med 2019; 12:125-132. [PMID: 30460777 DOI: 10.1111/jebm.12324]
Abstract
BACKGROUND Negative pressure wound therapy (NPWT) is proposed to provide better wound healing than standard wound management, but the quality of evidence from randomized controlled trials (RCTs) varies. METHODS We included participants with any kind of wound and both commercial and homemade NPWT systems. Comparators were any other wound dressing, including variant NPWTs. We included RCTs randomizing patients or wounds in parallel or crossover designs. We searched PubMed and the Cochrane Library on January 03, 2018. We assessed risk of bias according to Cochrane methodology and the appropriateness of clinical endpoints according to the Food and Drug Administration (FDA). RESULTS We included 93 RCTs originating in 30 countries: 70 studies on open wounds and 23 on closed wounds. For random sequence generation, we judged the risk of bias unclear or high in 50% (47 of 93) of studies; for allocation concealment, in 90% (84 of 93). We identified 38 of 93 studies (41%) that based their conclusions on inappropriate endpoints. CONCLUSIONS High risk of bias concerning random sequence generation and allocation concealment limited the credibility of the majority of the 93 included RCTs on NPWT. A low risk of bias can and should be achieved for both items, and we recommend aligning future RCTs with Cochrane methodology. Many primary clinical endpoints were deemed invalid for making inferences about the efficacy of NPWT. We recommend using patient-centered endpoints, as requested by the FDA and suggested in the present systematic review.
Affiliation(s)
- Frank Peinemann: Children's Hospital, University Hospital, Cologne, Germany; FOM University of Applied Science for Economics & Management, Essen, Germany
- Alexander Labeit: Division of Population Health, Health Services Research and Primary Care, Faculty of Biological, Medical and Health Sciences, University of Manchester, Manchester, UK
23
Jones HE, Ades AE, Sutton AJ, Welton NJ. Use of a random effects meta-analysis in the design and analysis of a new clinical trial. Stat Med 2018; 37:4665-4679. [PMID: 30187505 PMCID: PMC6484819 DOI: 10.1002/sim.7948]
Abstract
In designing a randomized controlled trial, it has been argued that trialists should consider existing evidence about the likely intervention effect. One approach is to form a prior distribution for the intervention effect based on a meta‐analysis of previous studies and then power the trial on its ability to affect the posterior distribution in a Bayesian analysis. Alternatively, methods have been proposed to calculate the power of the trial to influence the “pooled” estimate in an updated meta‐analysis. These two approaches can give very different results if the existing evidence is heterogeneous, summarised using a random effects meta‐analysis. We argue that the random effects mean will rarely represent the trialist's target parameter, and so, it will rarely be appropriate to power a trial based on its impact upon the random effects mean. Furthermore, the random effects mean will not generally provide an appropriate prior distribution. More appropriate alternatives include the predictive distribution and shrinkage estimate for the most similar study. Consideration of the impact of the trial on the entire random effects distribution might sometimes be appropriate. We describe how beliefs about likely sources of heterogeneity have implications for how the previous evidence should be used and can have a profound impact on the expected power of the new trial. We conclude that the likely causes of heterogeneity among existing studies need careful consideration. In the absence of explanations for heterogeneity, we suggest using the predictive distribution from the meta‐analysis as the basis for a prior distribution for the intervention effect.
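The predictive distribution recommended above can be computed directly from a random-effects summary: the true effect in a new setting is centred on the pooled mean but inflated by the between-study heterogeneity. The sketch below uses a simple normal approximation (analyses in this literature often use a t-quantile to reflect uncertainty in the heterogeneity estimate, which widens the interval for small numbers of studies), and the summary values are hypothetical:

```python
import math

def predictive_interval(mu, se_mu, tau):
    """Approximate 95% predictive interval for the true effect in a new
    setting, given a random-effects summary: new effect ~ N(mu, tau^2 + se_mu^2).
    A normal quantile is used for simplicity, so the interval is slightly
    narrow compared with a t-based version when few studies contribute."""
    sd_pred = math.sqrt(tau ** 2 + se_mu ** 2)  # heterogeneity + estimation error
    z = 1.959963984540054                        # 97.5% normal quantile
    return mu - z * sd_pred, mu + z * sd_pred

# Hypothetical random-effects summary: mu = -0.30 (SE 0.10), tau = 0.25
lo, hi = predictive_interval(-0.30, 0.10, 0.25)
print(f"95% predictive interval: {lo:.2f} to {hi:.2f}")
```

Note how a pooled mean that looks precisely estimated (SE 0.10) still yields a wide predictive interval once heterogeneity is acknowledged, which is exactly why the authors caution against powering a trial on the random-effects mean alone.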
Affiliation(s)
- Hayley E Jones, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- A E Ades, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Alex J Sutton, Department of Health Sciences, University of Leicester, Leicester, UK
- Nicky J Welton, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK

24
Salanti G, Nikolakopoulou A, Sutton AJ, Reichenbach S, Trelle S, Naci H, Egger M. Planning a future randomized clinical trial based on a network of relevant past trials. Trials 2018; 19:365. [PMID: 29996869 PMCID: PMC6042258 DOI: 10.1186/s13063-018-2740-2]
Abstract
Background The important role of network meta-analysis of randomized clinical trials in health technology assessment and guideline development is increasingly recognized. This approach has the potential to obtain conclusive results earlier than with new standalone trials or conventional, pairwise meta-analyses. Methods Network meta-analyses can also be used to plan future trials. We introduce a four-step framework that aims to identify the optimal design for a new trial that will update the existing evidence while minimizing the required sample size. The new trial designed within this framework does not need to include all competing interventions and comparisons of interest and can contribute direct and indirect evidence to the updated network meta-analysis. We present the method by virtually planning a new trial to compare biologics in rheumatoid arthritis and a new trial to compare two drugs for relapsing-remitting multiple sclerosis. Results A trial design based on updating the evidence from a network meta-analysis of relevant previous trials may require a considerably smaller sample size to reach the same conclusion compared with a trial designed and analyzed in isolation. Challenges of the approach include the complexity of the methodology and the need for a coherent network meta-analysis of previous trials with little heterogeneity. Conclusions When used judiciously, conditional trial design could significantly reduce the required resources for a new study and prevent experimentation with an unnecessarily large number of participants.
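The core idea of conditional design, powering a new trial on its impact on the updated pooled estimate rather than on its standalone analysis, can be illustrated for the simplest case of a single pairwise comparison. A hedged sketch, not the authors' four-step framework: the fixed-effect model, the Monte Carlo approach, and all parameter values are assumptions.

```python
import math
import random

def power_of_updated_pooled(mu0, v0, delta, sigma, n_per_arm,
                            n_sim=20000, z_crit=1.96, seed=1):
    """Monte Carlo power of an updated fixed-effect meta-analysis.
    mu0, v0: pooled estimate and its variance from the existing meta-analysis;
    delta: assumed true effect; sigma: outcome SD; n_per_arm: new trial size."""
    rng = random.Random(seed)
    v_new = 2 * sigma ** 2 / n_per_arm       # variance of the new trial's effect estimate
    w0, w_new = 1 / v0, 1 / v_new
    hits = 0
    for _ in range(n_sim):
        y_new = rng.gauss(delta, math.sqrt(v_new))        # simulated new trial result
        pooled = (w0 * mu0 + w_new * y_new) / (w0 + w_new)
        se = math.sqrt(1 / (w0 + w_new))
        if pooled / se > z_crit:                          # updated pooled estimate conclusive?
            hits += 1
    return hits / n_sim

# Existing evidence: mu0 = 0.2, SE = sqrt(0.02) ~ 0.14 (not yet conclusive)
p_updated = power_of_updated_pooled(0.2, 0.02, delta=0.2, sigma=1.0, n_per_arm=100)
# Same trial analyzed in isolation (approximated by a vague prior meta-analysis)
p_alone = power_of_updated_pooled(0.0, 1e9, delta=0.2, sigma=1.0, n_per_arm=100)
```

With supportive existing evidence, the same 100-per-arm trial is considerably more likely to tip the updated pooled estimate past significance than to be significant on its own, which is the sense in which conditional design needs a smaller sample size.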
Affiliation(s)
- Georgia Salanti, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Adriani Nikolakopoulou, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Alex J Sutton, Department of Health Sciences, College of Medicine, Biological Sciences and Psychology, University of Leicester, Leicester, UK
- Stephan Reichenbach, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland; Department of Rheumatology, Immunology and Allergiology, University Hospital, University of Bern, Bern, Switzerland
- Sven Trelle, CTU Bern, University of Bern, Bern, Switzerland
- Huseyin Naci, LSE Health, Department of Health Policy, London School of Economics and Political Science, London, UK
- Matthias Egger, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland

25
Nikolakopoulou A, Mavridis D, Furukawa TA, Cipriani A, Tricco AC, Straus SE, Siontis GCM, Egger M, Salanti G. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study. BMJ 2018; 360:k585. [PMID: 29490922 PMCID: PMC5829520 DOI: 10.1136/bmj.k585]
Abstract
OBJECTIVE To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. DESIGN Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. DATA SOURCES Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. ELIGIBILITY CRITERIA FOR STUDY SELECTION Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons for which direct and indirect evidence disagreed, based on a side-splitting (node-splitting) test (P<0.10), were excluded. OUTCOMES AND ANALYSIS Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed, and the evidence against the null hypothesis was considered strong when the monitoring boundaries were crossed. The significance level was set at α=5%, with 90% power (β=10%) and an anticipated treatment effect equal to the final estimate from the network meta-analysis. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. RESULTS 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%).
Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing the treatments of interest continued to be published for eight comparisons after strong evidence had emerged in network meta-analysis. CONCLUSIONS In comparative effectiveness research, prospectively planned living network meta-analyses produced strong evidence against the null hypothesis more often and earlier than conventional, pairwise meta-analyses.
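The monitoring-boundary construction used here can be sketched for a single pairwise comparison: cumulative z-statistics are tracked as trials accumulate and compared against a boundary that is stricter early on. A simplified illustration, not the authors' method: the O'Brien-Fleming-type boundary shape, fixed-effect pooling, and the example data are assumptions.

```python
import math

def cumulative_z(effects, variances):
    """Cumulative fixed-effect z-statistics as trials accumulate."""
    zs, w_sum, m_sum = [], 0.0, 0.0
    for y, v in zip(effects, variances):
        w = 1.0 / v
        w_sum += w
        m_sum += w * y
        pooled, se = m_sum / w_sum, math.sqrt(1.0 / w_sum)
        zs.append(pooled / se)
    return zs

def obrien_fleming_boundary(info_fractions, z_final=1.96):
    """O'Brien-Fleming-type boundary: very strict early, ~z_final at full information."""
    return [z_final / math.sqrt(t) for t in info_fractions]

# Hypothetical accumulating trials, equal variances
zs = cumulative_z([0.1, 0.5, 0.5, 0.6], [0.04, 0.04, 0.04, 0.04])
bounds = obrien_fleming_boundary([0.25, 0.5, 0.75, 1.0])
naive_cross = next(i for i, z in enumerate(zs) if z > 1.96)       # naive fixed threshold
monitored_cross = next(i for i, z in enumerate(zs) if z > bounds[i])
```

In this toy sequence the naive 1.96 threshold is crossed one trial earlier than the monitoring boundary, illustrating why repeated looks at an accumulating meta-analysis need an adjusted threshold before evidence is declared "strong".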
Affiliation(s)
- Adriani Nikolakopoulou, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Dimitris Mavridis, Department of Primary Education, University of Ioannina, Ioannina, Greece; Centre de Recherche Épidémiologie et Statistique Sorbonne Paris Cité, Inserm/Université Paris Descartes, Paris, France
- Toshi A Furukawa, Departments of Health Promotion and Human Behavior and of Clinical Epidemiology, Kyoto University Graduate School of Medicine/School of Public Health, Kyoto, Japan
- Andrea Cipriani, Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK; Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, UK
- Andrea C Tricco, Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Ontario, Canada; Epidemiology Division, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Sharon E Straus, Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Ontario, Canada; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Matthias Egger, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Georgia Salanti, Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland

26
Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Health Technol Assess 2018; 20:1-254. [PMID: 27767013 DOI: 10.3310/hta20760]
Abstract
BACKGROUND This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review. OBJECTIVES (1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme. DATA SOURCES We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014. REVIEW METHODS This narrative systematic literature review comprised an update, extension and analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, in August 2014 through to March 2015. RESULTS The literature on impact assessment has much expanded. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal type approaches included constructionist, realist, critical and performative. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. 
The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers. DISCUSSION The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence's remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities' research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF, but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established. LIMITATIONS There were limitations in how far we could address the challenges that faced us as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus just on the HTA programme. CONCLUSIONS Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund.
They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines. FUNDING The National Institute for Health Research HTA programme.
Affiliation(s)
- James Raftery, Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, Southampton General Hospital, Southampton, UK
- Steve Hanney, Health Economics Research Group (HERG), Institute of Environment, Health and Societies, Brunel University London, London, UK
- Trish Greenhalgh, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Matthew Glover, Health Economics Research Group (HERG), Institute of Environment, Health and Societies, Brunel University London, London, UK
- Amanda Blatch-Jones, Wessex Institute, Faculty of Medicine, University of Southampton, Southampton, UK

27
Biau DJ, Boulezaz S, Casabianca L, Hamadouche M, Anract P, Chevret S. Using Bayesian statistics to estimate the likelihood a new trial will demonstrate the efficacy of a new treatment. BMC Med Res Methodol 2017; 17:128. [PMID: 28830464 PMCID: PMC5568256 DOI: 10.1186/s12874-017-0401-x]
Abstract
Background The common frequentist approach is limited in providing investigators with appropriate measures for conducting a new trial. To answer such important questions, one has to turn to Bayesian statistics. Methods As a worked example, we conducted a Bayesian cumulative meta-analysis to summarize the benefit of patient-specific instrumentation on the alignment of total knee replacement from previously published evidence. Data were sourced from Medline, Embase, and Cochrane databases. All randomised controlled comparisons of the effect of patient-specific instrumentation on the coronal alignment of total knee replacement were included. The main outcome was the risk difference, measured by the proportion of failures in the control group minus the proportion of failures in the experimental group. Through Bayesian statistics, we estimated cumulatively over publication time of the trial results: the posterior probabilities that the risk difference was more than 5% and 10%; the posterior probabilities that, given the results of all previously published trials, an additional fictive trial would achieve a risk difference of at least 5%; and the predictive probabilities that the observed failure rates would differ by at least 5% across arms. Results Thirteen trials were identified including 1092 patients, 554 in the experimental group and 538 in the control group. The cumulative mean risk difference was 0.5% (95% CrI: −5.7%; +4.5%). The posterior probabilities that the risk difference was superior to 5% and 10% were less than 5% after trial #4 and trial #2 respectively. The predictive probability that the difference in failure rates was at least 5% dropped from 45% after the first trial down to 11% after the 13th. Last, only unrealistic trial design parameters could change the overall evidence accumulated to date. Conclusions Bayesian probabilities are readily understandable when discussing the relevance of performing a new trial.
They provide investigators with the current probability that an experimental treatment is superior to a reference treatment. In case a trial is designed, they also provide the predictive probability that this new trial will reach the targeted risk difference in failure rates. Trial registration CRD42015024176.
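Posterior probabilities of the kind reported above can be reproduced in miniature with independent Beta posteriors for the failure rate in each arm. A hedged sketch under a uniform Beta(1,1) prior; the paper's actual model and data are not reproduced here, and all counts below are hypothetical.

```python
import random

def prob_risk_diff_exceeds(fail_c, n_c, fail_e, n_e,
                           margin=0.05, n_draws=50000, seed=7):
    """Posterior P(p_control - p_experimental > margin) under independent
    Beta(1,1) priors, estimated by Monte Carlo draws from the Beta posteriors."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        pc = rng.betavariate(1 + fail_c, 1 + n_c - fail_c)   # control failure rate
        pe = rng.betavariate(1 + fail_e, 1 + n_e - fail_e)   # experimental failure rate
        if pc - pe > margin:
            hits += 1
    return hits / n_draws

# Hypothetical data: a clear benefit vs. no difference
p_clear = prob_risk_diff_exceeds(30, 100, 10, 100)   # 30% vs 10% failures
p_null = prob_risk_diff_exceeds(20, 100, 20, 100)    # identical failure rates
```

A statement like "the posterior probability that the risk difference exceeds 5% is below 5%" is exactly this quantity computed on the cumulative data after each successive trial.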
Affiliation(s)
- David J Biau, INSERM U1153, Paris, France; Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014, Paris, France; Université Paris-Descartes, Paris 5, Paris, France
- Samuel Boulezaz, Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014, Paris, France
- Laurent Casabianca, Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014, Paris, France
- Moussa Hamadouche, Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014, Paris, France
- Philippe Anract, INSERM U1153, Paris, France; Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014, Paris, France
- Sylvie Chevret, INSERM U1153, Paris, France; Université Paris-Diderot, Paris 7, Paris, France

28
Ekmekci PE. An increasing problem in publication ethics: Publication bias and editors' role in avoiding it. Med Health Care Philos 2017; 20:171-178. [PMID: 28342053 DOI: 10.1007/s11019-017-9767-0]
Abstract
Publication bias is defined as "the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or the strength of the study findings." Publication bias distorts the accumulated data in the literature, causes the overestimation of potential benefits of an intervention, mantles the risks and adverse effects, and creates a barrier to assessing the clinical utility of drugs as well as evaluating the long-term safety of medical interventions. The World Medical Association, the International Committee of Medical Journal Editors, and the Committee on Publication Ethics have conferred responsibilities and ethical obligations on editors concerning the avoidance of publication bias. Despite the explicit statements in these international documents, the editors' role in and ability to avoid publication bias is still being discussed. Unquestionably, all parties involved in clinical research have the ultimate responsibility to sustain research integrity and the validity of accumulated general knowledge. Cooperation and commitment are required at every step of a clinical trial. However, this holistic approach does not exclude effective measures to be taken at the editors' level. The editors of major medical journals concluded that one precaution editors can take is to mandate registration of all clinical trials in a public repository as a precondition to submitting manuscripts to journals. Raising awareness regarding the value of publishing negative data for the scientific community and human health, and increasing the number of journals that are dedicated to publishing negative results or that set aside a section in their pages to do so, are positive steps editors can take to avoid publication bias.
Affiliation(s)
- Perihan Elif Ekmekci, Faculty of Medicine, Department of History of Medicine and Ethics, TOBB University of Economics and Technology, Söğütözü, Söğütözü Cd. No. 43, 06560, Ankara, Turkey

29
Chow JTY, Lam K, Naeem A, Akanda ZZ, Si FF, Hodge W. The pathway to RCTs: how many roads are there? Examining the homogeneity of RCT justification. Trials 2017; 18:51. [PMID: 28148278 PMCID: PMC5288880 DOI: 10.1186/s13063-017-1804-z]
Abstract
Background Randomized controlled trials (RCTs) form the foundational background of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the inputs used to justify conducting an RCT have not been studied. Methods We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting RCTs, such as the average study size cited, the number of studies cited, and the types of studies cited. Results For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas they were around 500 patients for Psychiatry and General Surgery. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating that there was even more discrepancy within each specialty. When standardizing by the sample size of the RCT, some of the discrepancies between and within specialties can be explained, but not all. On average, Ophthalmology papers cited review articles the most (2.96 studies per RCT) compared to less than 1.5 studies per RCT for all other specialties. Conclusions The justifications for RCTs vary widely both within and between specialties, and the justification for conducting RCTs is not standardized.
Affiliation(s)
- Jeffrey Tin Yu Chow, Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Kevin Lam, Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Abdul Naeem, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Zarique Z Akanda, Faculty of Science, The University of Western Ontario, London, Canada
- Francie Fengqin Si, Department of Ophthalmology, Ivey Eye Institute, St. Joseph's Health Care London, London, Canada
- William Hodge, Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada; Department of Ophthalmology, Ivey Eye Institute, St. Joseph's Health Care London, London, Canada

30
Frandsen TF, Nicolaisen J. Citation behavior: A large-scale test of the persuasion by name-dropping hypothesis. J Assoc Inf Sci Technol 2016. [DOI: 10.1002/asi.23746]
Affiliation(s)
- Jeppe Nicolaisen, Royal School of Library and Information Science, University of Copenhagen, Birketinget 6, DK-2300 Copenhagen, Denmark

31
Kulinskaya E, Huggins R, Dogo SH. Sequential biases in accumulating evidence. Res Synth Methods 2016; 7:294-305. [PMID: 26626562 PMCID: PMC5031232 DOI: 10.1002/jrsm.1185]
Abstract
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed 'sequential decision bias' and 'sequential design bias', are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed-effect and the random-effects models of meta-analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence-based approaches to the development of science.
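Sequential decision bias, an estimate-dependent decision about whether to run a further study, can be demonstrated in a few lines of simulation. A stylized sketch, not the authors' model: the fixed-effect setting, the naive stop-when-significant rule, and all parameter values are assumptions chosen for illustration.

```python
import math
import random

def simulate_sequential_bias(true_effect=0.2, v=0.04, max_studies=5,
                             n_sim=20000, z_stop=1.96, seed=3):
    """Simulate sequential decision bias: after each study, a further study is
    run only while the cumulative estimate is not yet 'significant'. Returns
    the mean final pooled estimate across simulated evidence sequences."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_sim):
        ys = []
        for _k in range(max_studies):
            ys.append(rng.gauss(true_effect, math.sqrt(v)))   # new study result
            mean = sum(ys) / len(ys)
            se = math.sqrt(v / len(ys))
            if mean / se > z_stop:                            # stop once 'significant'
                break
        finals.append(sum(ys) / len(ys))
    return sum(finals) / len(finals)

biased = simulate_sequential_bias()                       # stopping rule active
unbiased = simulate_sequential_bias(z_stop=float("inf"))  # always run all studies
```

With the stopping rule active, the mean final pooled estimate exceeds the true effect, while disabling the rule recovers an unbiased estimate, matching the paper's point that the correlation between the current estimate and the decision to continue is what drives the bias.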
Affiliation(s)
- Elena Kulinskaya, School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
- Richard Huggins, Department of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
- Samson Henry Dogo, School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK

32
Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol 2015; 15:108. [PMID: 26715462 PMCID: PMC4696153 DOI: 10.1186/s12874-015-0102-2]
Abstract
Background Chalmers and Glasziou's paper published in 2014 recommends that research funding bodies mandate that proposals for additional primary research be built on systematic reviews of existing evidence showing what is already known. Jones et al. identified that 11 (23%) of 48 trials funded during 2006-8 by the National Institute for Health Research Health Technology Assessment (NIHR HTA) Programme did not reference a systematic review. That study did not explore the reasons for trials not referencing a systematic review or consider trials using any other evidence in the absence of a systematic review. Referencing a systematic review may not be possible in certain circumstances, for instance if the systematic review does not address the question being proposed in the trial. The current study extended Jones' study by exploring the reasons why trials did not reference a systematic review and included a more recent cohort of trials funded in 2013 to determine if there were any changes in the referencing or use of systematic reviews. Methods Two cohorts of NIHR HTA randomised controlled trials were included. Cohort I included the same trials as Jones et al. (with the exception of one trial, which was discontinued). Cohort II included NIHR HTA trials funded in 2013. Data extraction was undertaken independently by two reviewers using full applications and trial protocols. Descriptive statistics were used and no formal statistical analyses were conducted. Results Five (11%) of the 47 trials funded during 2006-2008 did not reference a systematic review. These five trials had justifiable reasons for not referencing systematic reviews. All trials from Cohort II referenced a systematic review. A quarter of all trials with a preceding systematic review used a primary outcome different from those stated in the reviews.
Conclusions The NIHR requires that proposals for new primary research be justified by existing evidence, and the findings of this study confirm adherence to this requirement, with a high rate of applications using systematic reviews.
Affiliation(s)
- Sheetal Bhurke, Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Andrew Cook, National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK
- Anna Tallant, National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Amanda Young, Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Elaine Williams, National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- James Raftery, Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK

33
Curley GF, McAuley DF. Clinical trial design in prevention and treatment of acute respiratory distress syndrome. Clin Chest Med 2014; 35:713-27. [PMID: 25453420 DOI: 10.1016/j.ccm.2014.08.009]
Abstract
To determine the clinical value of novel acute respiratory distress syndrome (ARDS) agents, we need to improve our ability to define appropriate molecular targets for preclinical development and to develop better methods. Clinical trials must have realistic sample sizes and meaningful end points and use the available observational and meta-analytical data to inform design. Biomarker-driven studies or defined ARDS subsets should be considered to categorize specific at-risk populations most likely to benefit from a new treatment. Innovations in clinical trial design should be pursued to improve the outlook for future interventional trials in ARDS.
Collapse
Affiliation(s)
- Gerard F Curley
- Department of Anesthesia, Keenan Research Centre for Biomedical Science, Li Ka Shing Knowledge Institute, St Michael's Hospital, 30, Bond Street, Toronto, Ontario M5B 1W8, Canada
- Daniel F McAuley
- School of Medicine, Dentistry and Biomedical Science, Centre for Infection and Immunity, Queen's University Belfast, Health Sciences Building, 97 Lisburn Road, Belfast, Northern Ireland BT9 7BL, UK.
34
Affiliation(s)
- Magne Nylenna
- The Norwegian Knowledge Centre for the Health Services, Oslo, Norway
35
Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O'Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med 2014; 29:1534-41. [PMID: 24965281] [PMCID: PMC4238192] [DOI: 10.1007/s11606-014-2913-y]
Abstract
BACKGROUND This paper extends the findings of the Cochrane systematic review of audit and feedback on professional practice to explore the estimate of effect over time and examine whether new trials have added to knowledge regarding how to optimize the effectiveness of audit and feedback. METHODS We searched the Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE for randomized trials of audit and feedback compared to usual care, with objectively measured outcomes assessing compliance with intended professional practice. Two reviewers independently screened articles and abstracted variables related to the intervention, the context, and trial methodology. The median absolute risk difference in compliance with intended professional practice was determined for each study, and adjusted for baseline performance. The effect size across studies was recalculated as studies were added to the cumulative analysis. Meta-regressions were conducted for studies published up to 2002, 2006, and 2010 in which characteristics of the intervention, the recipients, and trial risk of bias were tested as predictors of effect size. RESULTS Of the 140 randomized clinical trials (RCTs) included in the Cochrane review, 98 comparisons from 62 studies met the criteria for inclusion. The cumulative analysis indicated that the effect size became stable in 2003 after 51 comparisons from 30 trials. Cumulative meta-regressions suggested new trials are contributing little further information regarding the impact of common effect modifiers. Feedback appears most effective when: delivered by a supervisor or respected colleague; presented frequently; featuring both specific goals and action-plans; aiming to decrease the targeted behavior; baseline performance is lower; and recipients are non-physicians. DISCUSSION There is substantial evidence that audit and feedback can effectively improve quality of care, but little evidence of progress in the field.
There are opportunity costs for patients, providers, and health care systems when investigators test quality improvement interventions that do not build upon, or contribute toward, extant knowledge.
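The cumulative analysis described in this abstract can be sketched in a few lines: re-pool a fixed-effect (inverse-variance) estimate each time a study is added and watch when it stabilises. The effect sizes and variances below are invented for illustration; they are not taken from the review.

```python
def cumulative_pool(effects, variances):
    """Cumulative fixed-effect (inverse-variance) meta-analysis:
    return the pooled estimate after each study is added in turn."""
    pooled = []
    for k in range(1, len(effects) + 1):
        w = [1 / v for v in variances[:k]]
        est = sum(wi * e for wi, e in zip(w, effects[:k])) / sum(w)
        pooled.append(round(est, 3))
    return pooled

# Hypothetical absolute risk differences and their variances:
risk_diffs = [0.10, 0.04, 0.06, 0.05]
variances  = [0.004, 0.002, 0.003, 0.001]
print(cumulative_pool(risk_diffs, variances))  # estimate settles quickly
```

When the trajectory flattens, as it did here by 2003 in the review, further trials of the same design add little information about the overall effect.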
Affiliation(s)
- Noah M Ivers
- Family Practice Health Centre and Institute for Health Systems Solutions and Virtual Care, Women's College Hospital, Toronto, Ontario, Canada.
36
Naci H, Ioannidis JPA. How good is "evidence" from clinical studies of drug effects and why might such evidence fail in the prediction of the clinical utility of drugs? Annu Rev Pharmacol Toxicol 2014; 55:169-89. [PMID: 25149917] [DOI: 10.1146/annurev-pharmtox-010814-124614]
Abstract
Promising evidence from clinical studies of drug effects does not always translate to improvements in patient outcomes. In this review, we discuss why early evidence is often ill suited to the task of predicting the clinical utility of drugs. The current gap between initially described drug effects and their subsequent clinical utility results from deficits in the design, conduct, analysis, reporting, and synthesis of clinical studies, often creating conditions that generate favorable, but ultimately incorrect, conclusions regarding drug effects. There are potential solutions that could improve the relevance of clinical evidence in predicting the real-world effectiveness of drugs. What is needed is a new emphasis on clinical utility, with nonconflicted entities playing a greater role in the generation, synthesis, and interpretation of clinical evidence. Clinical studies should adopt strong design features, reflect clinical practice, and evaluate outcomes and comparisons that are meaningful to patients. Transformative changes to the research agenda may generate more meaningful and accurate evidence on drug effects to guide clinical decision making.
Affiliation(s)
- Huseyin Naci
- LSE Health, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
37
Lund H. From evidence-based practice to evidence-based research – Reaching research-worthy problems by applying an evidence-based approach. European Journal of Physiotherapy 2014. [DOI: 10.3109/21679169.2014.917838]
38
Cook JA, Hislop JM, Altman DG, Briggs AH, Fayers PM, Norrie JD, Ramsay CR, Harvey IM, Vale LD. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: Two surveys of trialists' practice. Clin Trials 2014; 11:300-308. [PMID: 24603006] [DOI: 10.1177/1740774514521907]
Abstract
BACKGROUND Central to the design of a randomised controlled trial (RCT) is a calculation of the number of participants needed. This is typically achieved by specifying a target difference, which enables the trial to identify a difference of a particular magnitude should one exist. Seven methods have been proposed for formally determining what the target difference should be. However, in practice, it may be driven by convenience or some other informal basis. It is unclear how aware the trialist community is of these formal methods or whether they are used. PURPOSE To determine current practice regarding the specification of the target difference by surveying trialists. METHODS Two surveys were conducted: (1) Members of the Society for Clinical Trials (SCT): participants were invited to complete an online survey through the society's email distribution list. Respondents were asked about their awareness, use of, and willingness to recommend methods; (2) Leading UK- and Ireland-based trialists: the survey was sent to UK Clinical Research Collaboration registered Clinical Trials Units, Medical Research Council UK Hubs for Trial Methodology Research, and the Research Design Services of the National Institute for Health Research. This survey also included questions about the most recent trial developed by the respondent's group. RESULTS Survey 1: Of the 1182 members on the SCT membership email distribution list, 180 responses were received (15%). Awareness of methods ranged from 69 (38%) for health economic methods to 162 (90%) for pilot study. Willingness to recommend among those who had used a particular method ranged from 56% for the opinion-seeking method to 89% for the review of evidence-base method. Survey 2: Of the 61 surveys sent out, 34 (56%) responses were received. Awareness of methods ranged from 33 (97%) for the review of evidence-base and pilot methods to 14 (41%) for the distribution method. 
The highest level of willingness to recommend among users was for the anchor method (87%). Based upon the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. LIMITATIONS The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. CONCLUSION Substantial variations in practice exist with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods.
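The arithmetic that makes the target difference so consequential can be sketched for the simplest case, a two-arm parallel trial with a continuous outcome, using the standard normal-approximation formula. The numbers are illustrative only and do not come from the surveys.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.9):
    """Participants per group needed to detect target difference
    `delta` (continuous outcome, standard deviation `sd`) with a
    two-sided significance level `alpha` and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / delta) ** 2)

# Halving the target difference roughly quadruples the sample size,
# which is why the basis for choosing it deserves scrutiny:
print(n_per_group(delta=5, sd=10))    # 85 per group
print(n_per_group(delta=2.5, sd=10))  # 337 per group
```

The quadratic dependence on `sd / delta` explains why an optimistic target difference, however arrived at, can make an otherwise infeasible trial look achievable.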
Affiliation(s)
- Jonathan A Cook
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK; Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Jennifer M Hislop
- Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
- Doug G Altman
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Andrew H Briggs
- Health Economics and Health Technology Assessment, University of Glasgow, Glasgow, UK
- Peter M Fayers
- Population Health, University of Aberdeen, Aberdeen, UK; Department of Cancer Research and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- John D Norrie
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Craig R Ramsay
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Ian M Harvey
- Faculty of Medicine and Health Sciences, University of East Anglia, Norwich, UK
- Luke D Vale
- Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
39
Affiliation(s)
- Elena Kulinskaya
- School of Computing Sciences, University of East Anglia, Norwich NR4 7TJ, UK
- Stephan Morgenthaler
- Ecole polytechnique fédérale de Lausanne (EPFL), Station 8, 1015 Lausanne, Switzerland
- Robert G. Staudte
- Department of Statistics and Mathematics, La Trobe University, Melbourne, VIC 3086, Australia
40
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, Howells DW, Ioannidis JPA, Oliver S. How to increase value and reduce waste when research priorities are set. Lancet 2014; 383:156-65. [PMID: 24411644] [DOI: 10.1016/s0140-6736(13)62229-1]
Abstract
The increase in annual global investment in biomedical research (reaching US$240 billion in 2010) has resulted in important health dividends for patients and the public. However, much research does not lead to worthwhile achievements, partly because some studies are done to improve understanding of basic mechanisms that might not have relevance for human health. Additionally, good research ideas often do not yield the anticipated results. As long as the way in which these ideas are prioritised for research is transparent and warranted, these disappointments should not be deemed wasteful; they are simply an inevitable feature of the way science works. However, some sources of waste cannot be justified. In this report, we discuss how avoidable waste can be considered when research priorities are set. We have four recommendations. First, ways to improve the yield from basic research should be investigated. Second, the transparency of processes by which funders prioritise important uncertainties should be increased, making clear how they take account of the needs of potential users of research. Third, investment in additional research should always be preceded by systematic assessment of existing evidence. Fourth, sources of information about research that is in progress should be strengthened and developed and used by researchers. Research funders have primary responsibility for reductions in waste resulting from decisions about what research to do.
Affiliation(s)
- Michael B Bracken
- School of Public Health and School of Medicine, Yale University, New Haven, CT, USA
- Ben Djulbegovic
- Center for Evidence-Based Medicine and Health Outcomes Research, Division of Internal Medicine, University of South Florida, Tampa, FL, USA; Department of Hematology and Department of Health Outcomes and Behavior, H Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Silvio Garattini
- Istituto di Ricovero e Cura a Carattere Scientifico Istituto di Ricerche Farmacologiche Mario Negri, Milan, Italy
- A Metin Gülmezoglu
- UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), WHO, Geneva, Switzerland
- David W Howells
- Florey Institute of Neuroscience and Mental Health, Melbourne, VIC, Australia
- John P A Ioannidis
- Stanford Prevention Research Center, Department of Medicine, School of Medicine, Stanford University, Stanford, CA, USA; Division of Epidemiology, Department of Health Research and Policy, School of Medicine, Stanford University, Stanford, CA, USA; Department of Statistics, School of Humanities and Sciences, Stanford University, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
41
Kulinskaya E, Wood J. Trial sequential methods for meta-analysis. Res Synth Methods 2013; 5:212-20. [PMID: 26052847] [DOI: 10.1002/jrsm.1104]
Abstract
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual difficulties arise in the random effects model. One approach applying sequential meta-analysis to design is 'trial sequential analysis', developed by Wetterslev, Thorlund, Brok, Gluud and others from the Copenhagen Trial Unit. In trial sequential analysis, information size is based on the required sample size of a single new trial, which, in the random effects model, is obtained by simply inflating it in comparison with fixed effects meta-analysis. However, this is not sufficient as, depending on the amount of heterogeneity, a minimum of several new trials may be indicated, and the total number of new patients needed may be substantially reduced by planning an even larger number of small trials. We provide explicit formulae to determine the requisite minimum number of trials and their sample sizes within this framework, which also exemplify the conceptual difficulties referred to. We illustrate all these points with two practical examples, including the well-known meta-analysis of magnesium for myocardial infarction.
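The "required information size" idea the authors discuss can be sketched roughly: compute the fixed-effect information size and inflate it for heterogeneity. For simplicity the sketch below inflates by 1/(1 - I²); trial sequential analysis proper uses the related diversity statistic D², and the numbers here are illustrative only.

```python
from math import ceil
from statistics import NormalDist

def required_information_size(delta, sd, i_squared=0.0, alpha=0.05, power=0.9):
    """Total patients a meta-analysis needs to detect effect `delta`
    (continuous outcome, standard deviation `sd`), inflated for
    heterogeneity via an I^2-style adjustment factor."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    fixed = 4 * (z_a + z_b) ** 2 * (sd / delta) ** 2  # fixed-effect IS
    return ceil(fixed / (1 - i_squared))

print(required_information_size(delta=5, sd=10))                 # 169
print(required_information_size(delta=5, sd=10, i_squared=0.5))  # 337
```

The sketch shows the conceptual difficulty the paper raises: the inflation factor says how many *patients* are needed, but under a random effects model it says nothing about how those patients should be distributed across new trials, which is what the authors' explicit formulae address.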
Affiliation(s)
- Elena Kulinskaya
- School of Computing Sciences, University of East Anglia, Norwich, UK
42
Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol 2013; 13:50. [PMID: 23530582] [PMCID: PMC3621166] [DOI: 10.1186/1471-2288-13-50]
Abstract
BACKGROUND A systematic review, with or without a meta-analysis, should be undertaken to determine if the research question of interest has already been answered before a new trial begins. There has been limited research on how systematic reviews are used within the design of new trials; the aim of this study was to investigate how systematic reviews of earlier trials are used in the planning and design of new randomised trials. METHODS Documentation from the application process for all randomised trials funded by the National Institute for Health Research Health Technology Assessment (NIHR HTA) between 2006 and 2008 was obtained. This included the commissioning brief (if appropriate), outline application, minutes of the Board meeting in which the outline application was discussed, full application, detailed project description, referee comments, investigator response to referee comments, Board minutes on the full application and the trial protocol. Data were extracted on references to systematic reviews and how any such reviews had been used in the planning and design of the trial. RESULTS 50 randomised trials were funded by NIHR HTA during this period and documentation was available for 48 of these. The cohort was predominantly individually randomised parallel trials aiming to detect superiority between two treatments for a single primary outcome. 37 trials (77.1%) referenced a systematic review within the application and 20 of these (i.e. 41.7% of the total) used information contained in the systematic review in the design or planning of the new trial. The main areas in which systematic reviews were used were in the selection or definition of an outcome to be measured in the trial (7 of 37, 18.9%), the sample size calculation (7, 18.9%), the duration of follow up (8, 21.6%) and the approach to describing adverse events (9, 24.3%). Boards did not comment on the presence/absence or use of systematic reviews in any application.
CONCLUSIONS Systematic reviews were referenced in most funded applications but just over half of these used the review to inform the design. There is an expectation from funders that applicants will use a systematic review to justify the need for a new trial but no expectation regarding further use of a systematic review to aid planning and design of the trial. Guidelines for applicants and funders should be developed to promote the use of systematic reviews in the design and planning of randomised trials, to optimise delivery of new studies informed by the most up-to-date evidence base and to minimise waste in research.
Affiliation(s)
- Ashley P Jones
- Department of Biostatistics, Faculty of Health & Life Sciences, University of Liverpool, Brownlow Street, Liverpool L69 3GS, UK.
43
Gamble C, Wolf A, Sinha I, Spowart C, Williamson P. The role of systematic reviews in pharmacovigilance planning and Clinical Trials Authorisation application: example from the SLEEPS trial. PLoS One 2013; 8:e51787. [PMID: 23554852] [PMCID: PMC3598865] [DOI: 10.1371/journal.pone.0051787]
Abstract
BACKGROUND Adequate sedation is crucial to the management of children requiring assisted ventilation on Paediatric Intensive Care Units (PICU). The evidence base of randomised controlled trials (RCTs) in this area is small, and a trial was planned to compare midazolam and clonidine, two sedatives widely used within PICUs, neither of which is licensed for that use. The application to obtain a Clinical Trials Authorisation from the Medicines and Healthcare products Regulatory Agency (MHRA) required a dossier summarising the safety profiles of each drug, and the pharmacovigilance plan for the trial needed to be determined by this information. A systematic review was undertaken to identify reports relating to the safety of each drug. METHODOLOGY/PRINCIPAL FINDINGS The Summary of Product Characteristics (SmPC) was obtained for each sedative. The MHRA was requested to provide reports relating to the use of each drug as a sedative in children under the age of 16. Medline was searched to identify RCTs, controlled clinical trials, observational studies, case reports and series. 288 abstracts were identified for midazolam and 16 for clonidine, with full texts obtained for 80 and 6 articles respectively. Thirty-three studies provided data for midazolam and two for clonidine. The majority of data came from observational studies and case reports. The MHRA provided details of 10 and 3 reports of suspected adverse drug reactions, respectively. CONCLUSIONS/SIGNIFICANCE No adverse reactions were identified in addition to those specified within the SmPC for the licensed use of the drugs. Based on this information and the widespread use of both sedatives in routine practice, the pharmacovigilance plan was restricted to adverse reactions. The Clinical Trials Authorisation was granted based on the data presented in the SmPC and the pharmacovigilance plan within the clinical trial protocol, restricting collection and reporting to adverse reactions.
Affiliation(s)
- Carrol Gamble
- Clinical Trials Research Centre, University of Liverpool, Liverpool, Merseyside, United Kingdom.
44
Chan AW, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA, Dickersin K, Hróbjartsson A, Schulz KF, Parulekar WR, Krleza-Jeric K, Laupacis A, Moher D. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013; 346:e7586. [PMID: 23303884] [PMCID: PMC3541470] [DOI: 10.1136/bmj.e7586]
Abstract
High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol. This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org). The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.
Affiliation(s)
- An-Wen Chan
- Women's College Research Institute at Women's College Hospital, Department of Medicine, University of Toronto, Toronto, Canada M5G 1N8
45
Scott IA, Glasziou PP. Improving the effectiveness of clinical medicine: the need for better science. Med J Aust 2012; 196:304-8. [PMID: 22432658] [DOI: 10.5694/mja11.10364]
Abstract
Effective clinical practice is predicated on valid and relevant clinical science - a commodity in increasingly short supply. The pre-eminent place of clinical research has become tainted by methodological shortcomings, commercial influences and neglect of the needs of patients and clinicians. Researchers need to be more proactive in evaluating clinical interventions in terms of patient-important benefit, wide applicability and comparative effectiveness, and in adopting study designs and reporting standards that ensure accurate and transparent research outputs. Funders of research need to be more supportive of applied clinical research that rigorously evaluates the effectiveness of new treatments and synthesises existing knowledge into clinically useful systematic reviews. Several strategies for improving the state of the science are possible but their implementation requires collective action of all those undertaking and reporting clinical research.
Affiliation(s)
- Ian A Scott
- Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, QLD, Australia.
46
Langan D, Higgins JPT, Gregory W, Sutton AJ. Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis. J Clin Epidemiol 2012; 65:511-9. [PMID: 22342263] [DOI: 10.1016/j.jclinepi.2011.10.009]
Abstract
OBJECTIVE We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. STUDY DESIGN AND SETTING A number of augmentations are proposed to one of the most widely used of graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. RESULTS The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. CONCLUSION The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the updating prioritization of a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration.
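The significance-contour idea can be sketched numerically: compute the fixed-effect pooled z-statistic after adding one hypothetical new study, then vary that study's effect and standard error to see where significance would change. All numbers below are invented for illustration and are not from the paper.

```python
from math import sqrt

def updated_z(est_old, var_old, est_new, var_new):
    """Fixed-effect (inverse-variance) pooled z-statistic after adding
    one new study to an existing pooled estimate -- the quantity that
    significance contours trace across the funnel plot."""
    w1, w2 = 1 / var_old, 1 / var_new
    pooled = (w1 * est_old + w2 * est_new) / (w1 + w2)
    return pooled * sqrt(w1 + w2)  # pooled SE is 1/sqrt(w1 + w2)

# Existing meta-analysis: estimate 0.25 with variance 0.01 (z = 2.5).
# Adding a null study of comparable precision shrinks z but does not
# cross 1.96 -- one new study can have limited impact on significance.
print(round(updated_z(0.25, 0.01, 0.0, 0.02), 2))  # 2.04
```

Evaluating `updated_z` over a grid of (`est_new`, `var_new`) pairs and marking where it crosses the critical value is exactly how a contour could be drawn on the funnel plot.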
Affiliation(s)
- Dean Langan
- Clinical Trials Research Unit (CTRU), University of Leeds, 71-75 Clarendon Road, Leeds, West Yorkshire, LS2 9JT, UK.
47
Viechtbauer W. Learning from the past: refining the way we study treatments. J Clin Epidemiol 2010; 63:980-2. [DOI: 10.1016/j.jclinepi.2010.04.004]