1
Characterization changes and research waste in randomized controlled trials of global bariatric surgery over the past 20 years: cross-sectional study. Int J Surg 2024; 110:1420-1429. [PMID: 38116657] [PMCID: PMC10942146] [DOI: 10.1097/js9.0000000000001013]
Abstract
BACKGROUND: The results of several large randomized controlled trials (RCTs) have changed the clinical practice of bariatric surgery. However, the characteristics of global RCTs of bariatric surgery have not been reported internationally, and whether these RCTs involved research waste is unknown. METHODS: We searched ClinicalTrials.gov for bariatric surgery RCTs registered between January 2000 and December 2022 using the keywords 'Roux-en-Y gastric bypass' and 'sleeve gastrectomy'. The analysis was conducted in January 2023. RESULTS: A total of 326 RCTs were included in this study. The number of RCTs registered for sleeve gastrectomy and gastric bypass increased year by year globally. Europe has consistently accounted for the largest proportion, Asia's share has gradually increased, and North America's has decreased. A total of 171 RCTs were included in the waste analysis, of which 74 (43.8%) were published. Of these 74 published RCTs, 37 (50.0%) were judged to be adequately reported and 36 (48.6%) were judged to have avoidable design defects. Overall, 143 of the 171 RCTs (83.6%) exhibited at least one form of research waste. Body weight change as the primary endpoint (OR: 0.266, 95% CI: 0.103-0.687, P = 0.006) and enrolment greater than 100 (OR: 0.349, 95% CI: 0.146-0.832, P = 0.018) were independent protective factors against research waste. CONCLUSIONS: This study is the first to describe how the characteristics of mainstream bariatric surgery RCTs have changed globally over the last 20 years; it identifies a high burden of research waste, and its predictive factors, providing reference evidence for conducting bariatric surgery RCTs more rationally.
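The protective factors above are reported as odds ratios with 95% confidence intervals, presumably from a multivariable model. As a minimal sketch of the arithmetic behind such an estimate, the following computes an unadjusted odds ratio with a Wald-type 95% CI from a 2x2 table; the counts are invented for illustration, not the study's data:

```python
import math

def odds_ratio_wald(a, b, c, d):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = wasted/non-wasted trials with the factor,
    c/d = wasted/non-wasted trials without it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    z = 1.96  # normal quantile for a two-sided 95% interval
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10/100 trials wasted with weight change as the
# primary endpoint vs 30/100 wasted without it.
or_, lo, hi = odds_ratio_wald(10, 90, 30, 70)
```

An OR below 1 with a CI excluding 1, as in the abstract, indicates a protective association; the adjusted estimates in the paper additionally control for the other covariates.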
2
A Decade of Efforts to Add Value to Child Health Research Practices. J Pediatr 2024; 265:113840. [PMID: 38000771] [DOI: 10.1016/j.jpeds.2023.113840]
Abstract
OBJECTIVE: To identify practices that add value by improving the design, conduct, and reporting of child health research and reducing research waste. STUDY DESIGN: To categorize the contributions of members of the Standards for Research (StaR) in Child Health network, we developed a novel Child Health Improving Research Practices (CHIRP) framework comprising five domains, with 17 responsible research practice recommendations, meant to counteract avoidable waste in child health research and improve quality: 1) address research questions relevant to children, their families, clinicians, and researchers; 2) apply appropriate research design, conduct, and analysis; 3) ensure efficient research oversight and regulation; 4) provide accessible research protocols and reports; and 5) develop unbiased and usable research reports. All child-health-relevant publications by the 48 original StaR standards authors over the last decade were identified, and their main topic areas were categorized using this framework. RESULTS: A total of 247 publications were included in the final sample: 100 (41%) in domain 1 (3 recommendations), 77 (31%) in domain 2 (3), 35 (14%) in domain 3 (4), 20 (8%) in domain 4 (4), and 15 (6%) in domain 5 (3). We identified readily implementable "responsible" research practices to counter child health research waste and improve quality, especially in engaging patients and families throughout the research process, developing core outcome sets, and addressing ethics and regulatory oversight issues. CONCLUSION: While most of these practices are readily implementable, increased awareness of methodological issues and wider guideline uptake are needed to improve child health research. The CHIRP framework can be used to guide responsible research practices that add value to child health research.
3
Is biomedical research self-correcting? Modelling insights on the persistence of spurious science. R Soc Open Sci 2024; 11:231056. [PMID: 38298396] [PMCID: PMC10827424] [DOI: 10.1098/rsos.231056]
Abstract
The reality that volumes of published biomedical research are not reproducible is an increasingly recognized problem. Spurious results reduce the trustworthiness of reported science and increase research waste. While science should be self-correcting from a philosophical perspective, that observation in isolation says nothing about the effort required to nullify suspect findings, or about the factors shaping how quickly science is corrected. There is also a paucity of information on how perverse incentives in the publishing ecosystem, which favour novel positive findings over null results, shape the ability of published science to self-correct. Knowledge of the factors shaping the self-correction of science remains limited, constraining our ability to mitigate harms. This modelling study introduces a simple model of the dynamics of the publication ecosystem, exploring factors that influence research waste, trustworthiness, corrective effort, and time to correction. Results indicate that research waste and corrective effort depend strongly on field-specific false positive rates and on the delay before corrective results appear, during which spurious findings continue to propagate. The model also suggests conditions under which biomedical science is self-correcting, and those under which publishing correctives alone cannot stem the propagation of untrustworthy results. Finally, this work models a variety of potential mitigation strategies, including researcher- and publisher-driven interventions.
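The paper's actual model is not described in the abstract. The following toy simulation, under assumed parameters, illustrates the qualitative point: how long a spurious finding persists is governed by the field's false positive rate and the rate at which corrective replications are attempted.

```python
import random

def mean_persistence(n_claims=10_000, fpr=0.2, replication_rate=0.1, seed=1):
    """Toy model (not the authors'): each published claim is spurious
    with probability fpr; each year a spurious claim survives, an
    independent replication corrects it with probability
    replication_rate. Returns the mean years spurious claims persist."""
    rng = random.Random(seed)
    years_uncorrected = []
    for _ in range(n_claims):
        if rng.random() < fpr:              # a spurious finding is published
            years = 1
            while rng.random() > replication_rate:
                years += 1                   # it propagates uncorrected
            years_uncorrected.append(years)
    return sum(years_uncorrected) / len(years_uncorrected)

m = mean_persistence()  # geometric waiting time, expectation 1/replication_rate
```

With a 10% annual replication rate, spurious claims persist about ten years on average; halving the replication rate doubles the expected persistence, which is the delay-dependence the abstract describes.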
4
The trinity of good research: Distinguishing between research integrity, ethics, and governance. Account Res 2023:1-20. [PMID: 37475134] [DOI: 10.1080/08989621.2023.2239712]
Abstract
The words integrity, ethics, and governance are used interchangeably in relation to research. This masks important differences that must be understood when trying to address concerns regarding research culture. While progress has been made in identifying negative aspects of research culture (such as inequalities in hiring/promotion, perverse incentives, etc.) and practical issues that lead to research waste (outcome reporting bias, reproducibility, etc.), the responsibility for addressing these problems can be unclear due to the complexity of the research environment. One solution is to provide a clearer distinction between the perspectives of "Research Integrity," "Research Ethics," and "Research Governance." Here, it is proposed that Research Integrity should be understood as focused on the character of researchers, and consequently the responsibility for promoting it lies primarily with researchers themselves. This is a different perspective from Research Ethics, which is focused on judgments on the ethical acceptability of research, and should primarily be the responsibility of research ethics committees, often including input from the public as well as the research community. Finally, Research Governance focuses on legal and policy requirements, and although complementary to research integrity and ethics, is primarily the responsibility of expert research support officers with the skills and experience to address technical compliance.
5
Are European clinical trial funders' policies on clinical trial registration and reporting improving? A cross-sectional study. J Clin Transl Sci 2023; 7:e166. [PMID: 37588679] [PMCID: PMC10425870] [DOI: 10.1017/cts.2023.590]
Abstract
Objectives: To assess the extent to which the clinical trial registration and reporting policies of 25 of the world's largest public and philanthropic medical research funders meet the best-practice benchmarks stipulated by the 2017 WHO Joint Statement, and to document changes in the policies and monitoring systems of 19 European funders over the past year. Design, setting, and participants: Cross-sectional study, based on assessments of each funder's publicly available documentation, with results validated by the funders. The cohort comprises 25 of the largest medical research funders in Europe, Oceania, South Asia, and Canada. Interventions: Scoring all 25 funders with an 11-item assessment tool based on WHO best-practice benchmarks, grouped into three primary categories: trial registries, academic publication, and monitoring. Main outcome measures: How many of the 11 WHO best-practice items each of the 25 funders has put into place, and changes in the performance of the 19 previously assessed funders over the preceding year. Results: The 25 funders we assessed had put into place an average of 5/11 (49%) of the WHO best practices. Only 6/25 funders (24%) took the principal investigator's past reporting record into account during grant application review. Funders' performance varied widely, from 0/11 to 11/11 best practices adopted. Of the 19 funders for which 2021 baseline data were available, 10 (53%) had strengthened their policies over the preceding year. Conclusions: Most medical research funders need to do more to curb research waste and publication bias by strengthening their clinical trial policies.
6
How to limit uninformative trials: Results from a Delphi working group. Med 2023; 4:226-232. [PMID: 37060899] [DOI: 10.1016/j.medj.2023.03.003]
Abstract
To be justifiable, clinical trials must test novel hypotheses and produce informative results. However, many trials fail on this score. A Delphi process was used to establish consensus on 35 recommendations across five domains related to the role of scientific review in preventing uninformative trials.
7
Implementation strategies for high impact nephrology trials: the end of the trial is just the beginning. Kidney Int 2022; 102:1222-1227. [PMID: 35926657] [DOI: 10.1016/j.kint.2022.07.006]
8
Review finds core outcome set uptake in new studies and systematic reviews needs improvement. J Clin Epidemiol 2022; 150:154-164. [PMID: 35779824] [DOI: 10.1016/j.jclinepi.2022.06.016]
Abstract
OBJECTIVE: To review evidence about the uptake of core outcome sets (COS). A COS is an agreed standardized set of outcomes that should be measured and reported, as a minimum, in all clinical trials in a specific area of health or health care. STUDY DESIGN AND SETTING: This article analyses what is known about the uptake of COS in research. Similarities between COS and the outcomes recommended by stakeholders in the evidence ecosystem are reviewed, and the actions those stakeholders have taken to facilitate COS uptake are described. RESULTS: COS uptake is low in most research areas. Common facilitators relate to trialists' awareness and understanding. Common barriers were failure to include in the development process all specialties who might use the COS, and a lack of recommendations for how to measure the outcomes. Increasingly, COS developers are considering strategies for promoting uptake earlier in the process, including actions beyond traditional dissemination approaches. Overlap between COS and the outcomes in regulatory documents and health technology assessments is good. An increasing number and variety of organisations recommend that COS be considered. CONCLUSION: We suggest actions that various stakeholders can take to improve COS uptake. Research is needed to assess the impact of these actions and identify effective evidence-based strategies.
9
Improving clinical trial transparency at UK universities: Evaluating 3 years of policies and reporting performance on the European Clinical Trial Register. Clin Trials 2022; 19:217-223. [PMID: 35168372] [PMCID: PMC9036155] [DOI: 10.1177/17407745211071015]
Abstract
BACKGROUND: In January 2019, the House of Commons Science and Technology Committee sent letters to UK universities admonishing them to achieve compliance with results-reporting requirements for Clinical Trials of Investigational Medicinal Products (CTIMPs) by summer 2019. This study documents changes in the clinical trial policies and CTIMP reporting performance of 20 major UK universities following that intervention. METHODS: Freedom of Information requests were filed in June 2018 and June 2020 to obtain clinical trial registration and reporting policies covering both CTIMPs and all other clinical trials. Two independent reviewers assessed the policies against transparency benchmarks based on World Health Organization best practices. To evaluate universities' trial reporting performance, we used a public online tracking tool, the EU Trials Tracker, which assesses compliance with regulatory CTIMP disclosure requirements on the European Clinical Trial Register; specifically, we evaluated whether universities were adhering to the EU requirement to post summary results on the trial registry within 12 months of trial completion. RESULTS: Mean policy strength increased from 2.8 to 4.9 points (out of a maximum of 7) between June 2018 and June 2020. In October 2018, an average of 29% of due CTIMPs across the university sponsors in the cohort had results available on the European trial registry; by June 2021 this had increased to 91%, with 5 universities achieving 100% reporting performance. All 20 universities reported more than 70% of their due trial results on the European trial registry. INTERPRETATION: Political pressure appears to have a significant positive impact on UK universities' clinical trial reporting policies and performance. Similar approaches could be used to improve reporting performance for other types of sponsors, other types of trials, and in other countries.
10
In-depth qualitative interviews identified barriers and facilitators that influenced chief investigators' use of core outcome sets in randomised controlled trials. J Clin Epidemiol 2021; 144:111-120. [PMID: 34896233] [PMCID: PMC9094758] [DOI: 10.1016/j.jclinepi.2021.12.004]
Abstract
OBJECTIVE This study aimed to investigate barriers and facilitators to core outcome set (COS) uptake in randomised controlled trials to inform the first steps in developing interventions to improve the uptake of COS. STUDY DESIGN AND SETTING Semi-structured qualitative interviews with a purposive sample of UK chief investigators were audio-recorded, transcribed and analysed thematically. Where appropriate, barriers and facilitators were mapped to components of behaviour informed by the COM-B model of behaviour. RESULTS Thirteen chief investigators were interviewed. Facilitators to uptake included: the behaviour of investigators, for example, their awareness and understanding of COS; and the wider research system, for example, recommendations to use COS from funders and journals. Barriers to uptake included: the perceived characteristics of COS, for example, increasing patient burden and recommendations becoming outdated; and the COS development process, for example, not including all specialties who will use the COS. CONCLUSIONS Based on the barriers and facilitators identified, recommendations to improve COS uptake include ensuring engagement with the research community who will use the COS, involving patients in the development of COS and ensuring COS remain up to date.
11
An international core outcome set for evaluating interventions to improve informed consent to clinical trials: The ELICIT Study. J Clin Epidemiol 2021; 137:14-22. [PMID: 33652081] [PMCID: PMC8485845] [DOI: 10.1016/j.jclinepi.2021.02.020]
Abstract
This is the first internationally agreed minimum set of outcomes deemed essential to measure in all future studies evaluating interventions to improve decisions about participating in a randomized controlled trial. It was developed with broad stakeholder involvement, including potential trial participants (e.g., patients or others who could provide a lay perspective), trialists, research nurses, social scientists, clinicians, bioethicists, and research ethics committee members. It represents outcomes of core importance to multiple stakeholders and, if adopted, will improve the relevance of future trials in this field.
Objective: To develop a core outcome set for the evaluation of interventions that aim to improve how people make decisions about whether to participate in randomized controlled trials of healthcare interventions (the ELICIT Study). Study design: International mixed-methods study involving a systematic review of existing outcomes, semi-structured interviews, an online Delphi survey, and a face-to-face consensus meeting. Results: The literature review and stakeholder interviews (n = 25) initially identified 1045 reported outcomes, which were grouped into 40 distinct outcomes. These 40 outcomes were scored for importance in two rounds of an online Delphi survey (n = 79), and 18 people attended the consensus meeting. Consensus was reached on 12 core outcomes: therapeutic misconception; comfort with decision; authenticity of decision; communication about the trial; empowerment; sense of altruism; equipoise; knowledge; salience of questions; understanding; how helpful the process was for decision making; and trial attrition. Conclusion: The ELICIT core outcome set is the first internationally agreed minimum set of outcomes deemed essential to measure in all future studies evaluating interventions to improve decisions about participating in a randomized controlled trial. Use of the ELICIT core set will ensure that results from these trials are comparable and relevant to all stakeholders. Registration: COMET database - http://www.comet-initiative.org/Studies/Details/595.
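The abstract does not state ELICIT's consensus definition. A commonly used core outcome set Delphi rule (assumed here purely for illustration, not taken from the ELICIT protocol) declares "consensus in" when at least 70% of participants rate an outcome 7-9 on a 1-9 scale and fewer than 15% rate it 1-3:

```python
def consensus_in(ratings, in_threshold=0.70, out_threshold=0.15):
    """Apply a common COS Delphi rule (an assumed convention, not the
    ELICIT study's documented criterion): 'consensus in' if >= 70% of
    participants rate the outcome 7-9 on a 1-9 scale and < 15% rate
    it 1-3."""
    n = len(ratings)
    share_high = sum(r >= 7 for r in ratings) / n  # rated 'critical'
    share_low = sum(r <= 3 for r in ratings) / n   # rated 'unimportant'
    return share_high >= in_threshold and share_low < out_threshold
```

For example, nine ratings of 9 and one of 5 would meet this rule, while eight ratings of 8 alongside two ratings of 2 would fail it on the lower threshold.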
12
Recommendations from long-term care reports, commissions, and inquiries in Canada. F1000Res 2021; 10:87. [PMID: 34631013] [PMCID: PMC8474099] [DOI: 10.12688/f1000research.43282.1]
Abstract
Background: Multiple long-term care (LTC) reports have issued similar recommendations for improvement across Canadian LTC homes. Our primary objective was to identify the most common recommendations made over the past 10 years; our secondary objective was to estimate the total cost of studying LTC issues repeatedly from 1998 to 2020. Methods: The qualitative and cost analyses were conducted in Canada from July to October 2020. Using a list of reports, inquiries, and commissions from the Royal Society of Canada Working Group on Long-Term Care, we coded recurrent recommendations in LTC reports. We contacted the sponsoring organizations for cost estimates, including direct and indirect costs; all costs were adjusted to 2020 Canadian dollar values. Results: Of the 80 Canadian LTC reports spanning 1998 to 2020, 24 (30%) were national in scope and 56 (70%) focused on provinces or municipalities. Report length ranged from 4 to 1491 pages, and the median number of contributors was 14 (interquartile range, IQR, 5-26) per report. A median of 8 (IQR 3.25-18) recommendations were made per report; the most common was to increase funding to LTC to improve staffing, direct care, and capacity (67% of reports). The total cost for all 80 reports was estimated at $23,626,442.78. Conclusions: Problems in Canadian LTC homes, and their solutions, have been known for decades. Despite this, governments and non-governmental agencies continue to produce more reports at a monetary and societal cost to Canadians.
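The two computations the methods describe can be sketched briefly: summarising recommendations per report with a median and IQR, and inflating historical costs to 2020 dollars via a price-index ratio. All figures below are invented placeholders, not the study's data or Statistics Canada indices:

```python
import statistics

def adjust_to_2020_cad(cost, year, cpi):
    """Scale a historical cost to 2020 dollars by the ratio of price
    index values (the cpi figures used here are illustrative only)."""
    return cost * cpi[2020] / cpi[year]

cpi = {1998: 91.3, 2010: 116.5, 2020: 137.0}  # hypothetical index values
report_cost_2020 = adjust_to_2020_cad(1_000_000, 1998, cpi)

# Median and IQR of recommendations per report (made-up sample of 7)
recs = [3, 5, 8, 8, 12, 18, 25]
median_recs = statistics.median(recs)
q1, _, q3 = statistics.quantiles(recs, n=4)  # IQR bounds
```

Repeating the adjustment for each report's nominal cost and summing gives a total in constant 2020 dollars, comparable across the 1998-2020 span.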
14
Recommendations from long-term care reports, commissions, and inquiries in Canada. F1000Res 2021; 10:87. [PMID: 34631013] [PMCID: PMC8474099] [DOI: 10.12688/f1000research.43282.2]
15
Evaluation of articles in metabolism research on the basis of their citations. Biochem Med (Zagreb) 2020; 31:010201. [PMID: 33380884] [PMCID: PMC7745158] [DOI: 10.11613/bm.2021.010201]
Abstract
Introduction: The number of research papers and journals is increasing each year, and millions of dollars are spent on them, yet there is evidence that many publications never influence clinical practice. We used citation analysis to measure the influence of metabolism publications from 2003-2013; papers with low citation rates are likely to be of the least value, and a high rate of such publications may be a marker of research waste. Materials and methods: We analysed 81,954 articles related to metabolism in 67 journals indexed in the Scopus citation database from 2003-2013. Articles with fewer than 5 citations within 5 years of their publication date were classed as poorly cited. Journals were ranked by the percentage of their articles that were poorly cited or uncited. Results: Over the 10-year period, the total number of articles increased by 127%. We found that 24% of articles were poorly cited within 5 years of publication. Journals in the bottom 25% and top 25% of the citation-rate rankings accounted for similar proportions of poorly cited articles, and most of the open access journals ranked in the top 25% for citation rates. Conclusions: Our analysis contradicts concerns about an increasing volume of publications with little impact: the proportion of poorly cited articles is low, with little change in the trend over 10 years, and the top- and bottom-ranked journals produced similar proportions of poorly cited articles. These findings suggest that further research is needed to study waste in metabolism research.
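The classification rule is simple enough to state in code. A sketch of the "poorly cited" definition used above, fewer than 5 citations within 5 years of publication, applied to invented citation records:

```python
def poorly_cited_share(articles, threshold=5, window=5):
    """Fraction of articles receiving fewer than `threshold` citations
    within `window` years of publication. Each article is a
    (publication_year, {year: citations_received_that_year}) pair."""
    poor = 0
    for pub_year, cites_by_year in articles:
        early = sum(c for y, c in cites_by_year.items()
                    if y < pub_year + window)
        if early < threshold:
            poor += 1
    return poor / len(articles)

# Toy records, not data from the study
sample = [
    (2003, {2004: 2, 2006: 1, 2012: 40}),  # only 3 citations by 2008
    (2003, {2004: 10}),                    # 10 citations by 2008
]
share = poorly_cited_share(sample)
```

Note the first article is classed as poorly cited even though it was heavily cited later; a fixed early window deliberately ignores such "sleeping beauty" trajectories, which is one limitation of this kind of metric.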
16
Tracing open data in emergencies: The case of the COVID-19 pandemic. Eur J Clin Invest 2020; 50:e13323. [PMID: 32558931] [PMCID: PMC7323033] [DOI: 10.1111/eci.13323]
Abstract
BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic constitutes an ongoing Public Health Emergency of International Concern (PHEIC). In 2015, the World Health Organization adopted an open-data policy recommendation for such situations. OBJECTIVES: This cross-sectional meta-research study aimed to assess the availability of open data, and the metrics, of articles pertaining to the COVID-19 outbreak in five high-impact journals. METHODS: All articles regarding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) published in five high-impact journals (Ann Intern Med, BMJ, JAMA, NEJM, and Lancet) until March 14, 2020 were retrieved. Metadata (type of article, number of authors, number of patients, citations, errata, and news and social media mentions) were extracted systematically for each article in each journal. Google Scholar and Scopus were used for citation and author metrics, respectively, and Altmetric and PlumX were used to retrieve news and social media mentions. The degree of adherence to the PHEIC open-data call was also evaluated. RESULTS: A total of 140 articles, mostly opinion papers, were published by March 14, 2020, followed by sixteen errata. The number of authors per article ranged from 1 to 63, and the number of patients with laboratory-confirmed SARS-CoV-2 infection reached 2645. Extensive hyperauthorship was evident among case studies. These publications accrued a total of 4210 cumulative crude citations and 342,790 news and social media mentions. Only one publication (0.7%) provided complete open data, while 32 (22.9%) included patient data. CONCLUSIONS: Despite the large number of manuscripts produced during the pandemic, the availability of open data remains limited.
17
Increasing value and reducing research waste in obstetrics: towards woman-centered research. Ultrasound Obstet Gynecol 2020; 55:151-156. [PMID: 30980569] [DOI: 10.1002/uog.20294]
18
Abstract
A surprisingly large proportion of medical research still shows poor quality in design, conduct, and analysis, undermining the robustness of findings and the validity of conclusions. Research waste remains a problem with a number of causes; asking the wrong research questions and ignoring existing evidence are preventable ones. Evidence maps are tools that can guide clinical investigators and help set the agenda for future research. In this article, we explain how they serve that goal and outline the steps required to build effective evidence maps.
19
Abstract
Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between the analysis time, at which a series of studies is up for meta-analysis, and results within the series. Dependencies introduce bias (Accumulation Bias) and invalidate the sampling distribution assumed for p-value tests, thus inflating type-I errors. But dependencies are also inevitable, since for science to accumulate efficiently, new research needs to be informed by past results. Here, we investigate various ways in which time influences error control in meta-analysis testing. We introduce an Accumulation Bias Framework that allows us to model a wide variety of practically occurring dependencies, including study series accumulation, meta-analysis timing, and approaches to multiple testing in living systematic reviews. The strength of this framework is that it shows how all dependencies affect p-value-based tests in a similar manner. This leads to two main conclusions. First, Accumulation Bias is inevitable, and even if it can be approximated and accounted for, no valid p-value tests can be constructed. Second, tests based on likelihood ratios withstand Accumulation Bias: they provide bounds on error probabilities that remain valid despite the bias. We leave the reader with a choice between two proposals to consider time in error control: either treat individual (primary) studies and meta-analyses as two separate worlds, each with its own timing, or integrate individual studies into the meta-analysis world. Taking up likelihood ratios in either approach allows for valid tests that relate well to the accumulating nature of scientific knowledge. Likelihood ratios can be interpreted as betting profits, earned in previous studies and invested in new ones, while the meta-analyst is allowed to cash out at any time and advise against future studies.
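The core problem described in this abstract can be shown with a toy simulation (an illustration of the general idea, not the paper's own framework or notation): when a follow-up study is run only because an earlier result looked promising, and a meta-analysis is then performed on the accumulated series, the conditional type-I error of a naive fixed-effect p-value test is inflated well above its nominal level.

```python
import numpy as np

# Toy Accumulation Bias illustration. All studies are simulated under the
# null hypothesis, so any rejection is a type-I error. A second study is
# run only when the first study's z-score looks "promising" (z > 1), and a
# two-study fixed-effect meta-analysis is performed on those series.
rng = np.random.default_rng(0)
n_series = 200_000

z1 = rng.standard_normal(n_series)  # study 1 z-scores under H0
z2 = rng.standard_normal(n_series)  # study 2 z-scores under H0

# A single study analysed on its own keeps the nominal 5% error rate.
single_error = np.mean(np.abs(z1) > 1.96)

# Series that accumulated: study 2 exists only because study 1 was promising.
continued = z1 > 1.0
z_meta = (z1[continued] + z2[continued]) / np.sqrt(2)  # fixed-effect combination
meta_error = np.mean(np.abs(z_meta) > 1.96)

print(f"single-study type-I error: {single_error:.3f}")      # close to 0.05
print(f"type-I error in accumulated series: {meta_error:.3f}")  # well above 0.05
```

The dependence between "a meta-analysis happens" and "earlier results were positive" is exactly what the standard p-value sampling distribution does not account for; the 1.96 threshold and z > 1 continuation rule here are arbitrary choices for the sketch.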
20
Variation in outcome reporting in randomized controlled trials of interventions for prevention and treatment of fetal growth restriction. Ultrasound Obstet Gynecol 2019; 53:598-608. [PMID: 30523658 DOI: 10.1002/uog.20189] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 11/13/2018] [Accepted: 11/22/2018] [Indexed: 06/09/2023]
Abstract
OBJECTIVE Although fetal growth restriction (FGR) is well known to be associated with adverse outcomes for the mother and offspring, effective interventions for the management of FGR are yet to be established. Trials reporting interventions for the prevention and treatment of FGR may be limited by heterogeneity in the underlying pathophysiology. The aim of this study was to conduct a systematic review of outcomes reported in randomized controlled trials (RCTs) assessing interventions for the prevention or treatment of FGR, in order to identify and categorize the variation in outcome reporting. METHODS MEDLINE, EMBASE and The Cochrane Library were searched from inception until August 2018 for RCTs investigating therapies for the prevention and treatment of FGR. Studies were assessed systematically and data on outcomes that were reported in the included studies were extracted and categorized. The methodological quality of the included studies was assessed using the Jadad score. RESULTS The search identified 2609 citations, of which 153 were selected for full-text review and 72 studies (68 trials) were included in the final analysis. There were 44 trials relating to the prevention of FGR and 24 trials investigating interventions for the treatment of FGR. The mean Jadad score of all studies was 3.07, and only nine of them received a score of 5. We identified 238 outcomes across the included studies. The most commonly reported were birth weight (88.2%), gestational age at birth (72.1%) and small-for-gestational age (67.6%). Few studies reported on any measure of neonatal morbidity (27.9%), while adverse effects of the interventions were reported in only 17.6% of trials. CONCLUSIONS There is significant variation in outcome reporting across RCTs of therapies for the prevention and treatment of FGR. The clinical applicability of future research would be enhanced by the development of a core outcome set for use in future trials.
21
Tackling poorly selected, collected, and reported outcomes in obstetrics and gynecology research. Am J Obstet Gynecol 2019; 220:71.e1-71.e4. [PMID: 30273584 DOI: 10.1016/j.ajog.2018.09.023] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 09/01/2018] [Accepted: 09/21/2018] [Indexed: 11/18/2022]
Abstract
Clinical research should ultimately improve patient care. To enable this, randomized controlled trials must select, collect, and report outcomes that are both relevant to clinical practice and genuinely reflect the perspectives of key stakeholders including health care professionals, researchers, and patients. Unfortunately, many randomized controlled trials fall short of this requirement. Complex issues, including a failure to take into account the perspectives of key stakeholders when selecting outcomes, variations in outcome definitions and measurement instruments, and outcome reporting bias make research evidence difficult to interpret, undermining the translation of research into clinical practice. Problems with poor outcome selection, measurement, and reporting can be addressed by developing, disseminating, and implementing core outcome sets. A core outcome set represents a minimum data set of outcomes developed using robust consensus science methods engaging diverse stakeholders including health care professionals, researchers, and patients. Core outcomes should be routinely utilized by researchers, collected in a standardized manner, and reported consistently in the final publication. They are currently being developed across our specialty including infertility, endometriosis, and preeclampsia. Recognizing poorly selected, collected, and reported outcomes as serious hindrances to progress in our specialty, more than 80 journals, including the Journal, have come together to support the Core Outcomes in Women's and Newborn Health (CROWN) initiative. The consortium supports researchers to develop, disseminate, and implement core outcome sets. Implementing core outcome sets could make a profound contribution to addressing poorly selected, collected, and reported outcomes. Implementation should ensure future randomized controlled trials have the necessary reach and relevance to inform clinical practice, enhance patient care, and improve patient outcomes.
22
Abstract
BACKGROUND We conducted a study of recommendations from the American Academy of Orthopaedic Surgeons (AAOS) guideline, "Optimizing the Management of Rotator Cuff Problems." Using these recommendations, we conducted searches of clinical trial registries and bibliographic databases to note the extent to which new research has been undertaken to address areas of deficiency. HYPOTHESIS Newly conducted research regarding rotator cuff repair and injury is available that will fill knowledge gaps identified by the AAOS guideline. STUDY DESIGN Cross-sectional study. METHODS For each recommendation in the AAOS guideline, we created PICO (participants, intervention, comparator, outcome) questions and search strings. Searches were conducted of ClinicalTrials.gov, the World Health Organization's International Clinical Trials Registry Platform, MEDLINE via PubMed, and EMBASE to locate studies undertaken after the final literature search performed by the AAOS work group. RESULTS We located 210 newly registered trials and 448 published studies that are relevant to the recommendations made in the rotator cuff guideline. The majority of the recommendations have been addressed by relevant registered trials or published studies. Of the 448 published studies, 185 directly addressed the guideline recommendations. Additionally, 71% of the 185 published studies directly addressing the recommendations were randomized trials or systematic reviews/meta-analyses. The most important finding of our study was that the recommendations in the AAOS rotator cuff guideline have been adequately addressed. CONCLUSION Orthopaedic researchers have adequately addressed knowledge gaps regarding rotator cuff repair treatment and management options. As such, the AAOS may consider a guideline update to ensure that recommendations reflect current findings in orthopaedic literature.
23
Using HTA and guideline development as a tool for research priority setting the NICE way: reducing research waste by identifying the right research to fund. BMJ Open 2018. [PMID: 29523564 PMCID: PMC5855177 DOI: 10.1136/bmjopen-2017-019777] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND The National Institute for Health and Care Excellence (NICE) was established in 1999 and provides national guidance and advice to improve health and social care. Several steps in the research cycle have been identified that can support the reduction of waste that occurs in biomedical research. The first step in the process is ensuring appropriate research priority setting occurs so only the questions that are needed to fill existing gaps in the evidence are funded. This paper summarises the research priority setting processes at NICE. METHODS NICE uses its guidance production processes to identify and prioritise research questions through systematic reviews, economic analyses and stakeholder consultations and then highlights those priorities by engagement with the research community. NICE also highlights its methodological areas for research to ensure the appropriate development and growth of the evidence landscape. RESULTS NICE has prioritised research questions through its guidance production and methodological work and has successfully had several research products funded through the National Institute for Health Research and Medical Research Council. This paper summarises those activities and results. CONCLUSIONS This activity of NICE therefore reduces research waste by ensuring that the research it recommends has been systematically prioritised through evidence reviews and stakeholder input.
24
Reporting guidelines for oncology research: helping to maximise the impact of your research. Br J Cancer 2018; 118:619-628. [PMID: 29471308 PMCID: PMC5846057 DOI: 10.1038/bjc.2017.407] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 09/21/2017] [Accepted: 09/22/2017] [Indexed: 12/13/2022] Open
Abstract
Many reports of health research omit important information needed to assess their methodological robustness and clinical relevance. Without clear and complete reporting, it is not possible to identify flaws or biases, reproduce successful interventions, or use the findings in systematic reviews or meta-analyses. The EQUATOR Network (http://www.equator-network.org/) promotes responsible reporting and the use of reporting guidelines to improve the accuracy, completeness, and transparency of health research. EQUATOR supports researchers by providing online resources and training. EQUATOR Oncology, a project funded by Cancer Research UK, aims to support cancer researchers reporting their research through the provision of online resources. In this article, our objective is to highlight reporting issues related to oncology research publications and to introduce reporting guidelines that are designed to aid high-quality reporting. We describe generic reporting guidelines for the main study types, and explain how these guidelines should and should not be used. We also describe 37 oncology-specific reporting guidelines, covering different clinical areas (e.g., haematology or urology) and sections of the report (e.g., methods or study characteristics); most of these are little-used. We also provide some background information on EQUATOR Oncology, which focuses on addressing the reporting needs of the oncology research community.
25
Avoidable Waste in Ophthalmic Epidemiology: A Review of Blindness Prevalence Surveys in Low and Middle Income Countries 2000-2014. Ophthalmic Epidemiol 2017; 25:13-20. [PMID: 28886260 DOI: 10.1080/09286586.2017.1328067] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
PURPOSE Sources of avoidable waste in ophthalmic epidemiology include duplication of effort, and survey reports remaining unpublished, gaining publication after a long delay, or being incomplete or of poor quality. The aim of this review was to assess these sources of avoidable waste by examining blindness prevalence surveys undertaken in low and middle income countries (LMICs) between 2000 and 2014. METHODS On December 1, 2016 we searched MEDLINE, EMBASE and Web of Science databases for cross-sectional blindness prevalence surveys undertaken in LMICs between 2000 and 2014. All surveys listed on the Rapid Assessment of Avoidable Blindness (RAAB) Repository website ("the Repository") were also considered. For each survey we assessed (1) availability of scientific publication, survey report, summary results tables and/or datasets; (2) time to publication from year of survey completion and journal attributes; (3) extent of blindness information reported; and (4) rigour when information was available from two sources (i.e. whether it matched). RESULTS Of the 279 included surveys (from 68 countries), 186 (67%) used RAAB methodology; 146 (52%) were published in a scientific journal, 57 (20%) were published in a journal and on the Repository, and 76 (27%) were on the Repository only (8% had tables; 19% had no information available beyond registration). Datasets were available for 50 RAABs (18% of included surveys). Time to publication ranged from <1 to 11 years (mean ± standard deviation, 2.8 ± 1.8 years). The extent of blindness information reported within studies varied (e.g. presenting and best-corrected, unilateral and bilateral); those with both a published report and Repository tables were most complete. For surveys published and with RAAB tables available, discrepancies were found in reporting of participant numbers (14% of studies) and blindness prevalence (15%). CONCLUSION Strategies are needed to improve the availability, consistency, and quality of information reported from blindness prevalence surveys, and hence reduce avoidable waste.
26
Discontinuation and non-publication of neurodegenerative disease trials: a cross-sectional analysis. Eur J Neurol 2017. [PMID: 28636179 DOI: 10.1111/ene.13336] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
BACKGROUND AND PURPOSE Trial discontinuation and non-publication represent major sources of research waste in clinical medicine. No previous studies have investigated non-dissemination bias in clinical trials of neurodegenerative diseases. METHODS ClinicalTrials.gov was searched for all randomized, interventional, phase II-IV trials that were registered between 1 January 2000 and 31 December 2009 and included adults with Alzheimer's disease, motor neurone disease, multiple sclerosis or Parkinson's disease. Publications from these trials were identified by extensive online searching and contact with authors, and multiple logistic regression analysis was performed to identify characteristics associated with trial discontinuation and non-publication. RESULTS In all, 362 eligible trials were identified, of which 12% (42/362) were discontinued, and 28% (91/320) of completed trials remained unpublished after 5 years. Trial discontinuation was independently associated with number of patients (P = 0.015; more likely in trials with ≤100 patients; odds ratio 2.65, 95% confidence interval 1.21-5.78) and phase of trial (P = 0.009; more likely in phase IV than phase III trials; odds ratio 3.90, 95% confidence interval 1.41-10.83). Trial non-publication was independently associated with blinding status (P = 0.005; more likely in single-blind than double-blind trials; odds ratio 5.63, 95% confidence interval 1.70-18.71), number of centres (P = 0.010; more likely in single-centre than multi-centre trials; odds ratio 2.49, 95% confidence interval 1.25-4.99), phase of trial (P = 0.041; more likely in phase II than phase IV trials; odds ratio 2.88, 95% confidence interval 1.04-7.93) and sponsor category (P = 0.001; more likely in industry-sponsored than university-sponsored trials; odds ratio 5.05, 95% confidence interval 1.87-13.63). CONCLUSIONS There is evidence of non-dissemination bias in randomized trials of interventions for neurodegenerative diseases. Associations with trial discontinuation and non-publication were similar to findings in other diseases. These biases may distort the therapeutic information available to inform clinical practice.
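Odds ratios with Wald 95% confidence intervals, as reported in abstracts like the one above, can be computed directly from a 2×2 table. The sketch below uses made-up counts purely for illustration, not the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome,     b = exposed without outcome,
    c = non-exposed with outcome, d = non-exposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: 20/100 single-centre trials unpublished
# versus 10/100 multi-centre trials unpublished.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 2.25 (95% CI 0.99-5.09)
```

Note that the abstract's odds ratios come from multiple logistic regression (adjusted for other covariates), whereas this unadjusted 2×2 calculation only shows where such figures come from in the simplest case.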
27
The RECORD reporting guidelines: meeting the methodological and ethical demands of transparency in research using routinely-collected health data. Clin Epidemiol 2016; 8:389-392. [PMID: 27799820 PMCID: PMC5076545 DOI: 10.2147/clep.s110528] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Routinely-collected health data (RCD) are now used for a wide range of studies, including observational studies, comparative effectiveness research, diagnostics, studies of adverse effects, and predictive analytics. At the same time, limitations inherent in using data collected without specific a priori research questions are increasingly recognized. There is also a growing awareness of the suboptimal quality of reports presenting research based on RCD. This has created a perfect storm of increased interest and use of RCD for research, together with inadequate reporting of the strengths and weaknesses of these data resources. The REporting of studies Conducted using Observational Routinely-collected Data (RECORD) statement was developed to address these limitations and to help researchers using RCD to meet their ethical obligations of complete and accurate reporting, as well as improve the utility of research conducted using RCD. The RECORD statement has been endorsed by more than 15 journals, including Clinical Epidemiology. This journal now recommends that authors submit the RECORD checklist together with any manuscript reporting on research using RCD.
28
The forecast for future clinical trials and clinical trialists-Storms or sunshine? Int J Stroke 2016; 11:738-40. [PMID: 27316456 DOI: 10.1177/1747493016655362] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 03/13/2016] [Indexed: 11/17/2022]
Abstract
Randomized controlled trials are the most unbiased way to evaluate many types of healthcare interventions. Pharmaceutical and medical technology industries play an important role in developing and testing new interventions that have commercial potential. However, many interventions for the prevention, treatment and rehabilitation of stroke are either not drugs or devices or have no commercial potential. Like many other clinicians who are uncertain about the value of existing or new treatments, we are involved in investigator-led clinical trials to resolve treatment uncertainties. There is common agreement that investigator-led clinical trials are facing increasing difficulties and that as a result clinicians may be deterred from pursuing clinical trials as a research career. In this article, we express our concerns for the future of such trials, balanced with the hope that systems to foster and sustain this important type of research in the future can be developed.
29
Better research reporting to improve the utility of routine data for making better treatment decisions. J Comp Eff Res 2016; 5:117-22. [PMID: 26930118 DOI: 10.2217/cer.15.66] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The availability of routinely collected health data, such as health administrative data, electronic health records, prescription records and disease registries, has increased in the information age. This has led to an explosion of reports of comparative effectiveness research using such data. Guidelines for the REporting of studies Conducted using Observational Routinely-collected Data (RECORD) will improve the completeness and transparency of reporting of research using routinely collected health data. The Journal of Comparative Effectiveness Research has endorsed these guidelines. In this commentary, the RECORD checklist is reprinted and members of the RECORD working committee reflect on the importance of these reporting guidelines for the field of comparative effectiveness research.
30
Improving the relevance and consistency of outcomes in comparative effectiveness research. J Comp Eff Res 2016; 5:193-205. [PMID: 26930385 PMCID: PMC4926524 DOI: 10.2217/cer-2015-0007] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2015] [Accepted: 01/07/2016] [Indexed: 01/15/2023] Open
Abstract
Policy makers have clearly indicated, through heavy investment in the Patient Centered Outcomes Research Institute, that reporting outcomes that are meaningful to patients is crucial for improvement in healthcare delivery and cost reduction. Better interpretation and generalizability of clinical research results that incorporate patient-centered outcomes research can be achieved by accelerating the development and uptake of core outcome sets (COS). COS provide a standardized minimum set of the outcomes that should be measured and reported in all clinical trials of a specific condition. The level of activity around COS has increased significantly over the past decade, with substantial progress in several clinical domains. However, there are many important clinical conditions for which high-quality COS have not been developed and there are limited resources and capacity with which to develop them. We believe that meaningful progress toward the goals behind the significant investments in patient-centered outcomes research and comparative effectiveness research will depend on a serious effort to address these issues.
31
Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform. Trials 2015; 16:261. [PMID: 26044814 PMCID: PMC4475334 DOI: 10.1186/s13063-015-0776-0] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2015] [Accepted: 05/21/2015] [Indexed: 11/17/2022] Open
Abstract
Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence-free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency. This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge. General support existed at the workshop for the Trial Forge approach to increase the evidence base for making randomised trial decisions and for improving trial efficiency. Agreed upon key processes included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. The process of linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go-to' website for research on randomised trials methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials. Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.
32
Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination. BMC Med 2015; 13:82. [PMID: 25889502 PMCID: PMC4411715 DOI: 10.1186/s12916-015-0317-4] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/24/2014] [Accepted: 03/10/2015] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND As the number of systematic reviews is growing rapidly, we systematically investigate whether meta-analyses published in leading medical journals present an outline of available evidence by referring to previous meta-analyses and systematic reviews. METHODS We searched PubMed for recent meta-analyses of pharmacological treatments published in high impact factor journals. Previous systematic reviews and meta-analyses were identified with electronic searches of keywords and by searching reference sections. We analyzed the number of meta-analyses and systematic reviews that were cited, described and discussed in each recent meta-analysis. Moreover, we investigated publication characteristics that potentially influence the referencing practices. RESULTS We identified 52 recent meta-analyses and 242 previous meta-analyses on the same topics. Of these, 66% of identified previous meta-analyses were cited, 36% described, and only 20% discussed by recent meta-analyses. The probability of citing a previous meta-analysis was positively associated with its publication in a journal with a higher impact factor (odds ratio, 1.49; 95% confidence interval, 1.06 to 2.10) and more recent publication year (odds ratio, 1.19; 95% confidence interval 1.03 to 1.37). Additionally, the probability of a previous study being described by the recent meta-analysis was inversely associated with the concordance of results (odds ratio, 0.38; 95% confidence interval, 0.17 to 0.88), and the probability of being discussed was increased for previous studies that employed meta-analytic methods (odds ratio, 32.36; 95% confidence interval, 2.00 to 522.85). CONCLUSIONS Meta-analyses on pharmacological treatments do not consistently refer to and discuss findings of previous meta-analyses on the same topic. Such neglect can lead to research waste and be confusing for readers. Journals should make the discussion of related meta-analyses mandatory.
33
Publication rate for funded studies from a major UK health research funder: a cohort study. BMJ Open 2013; 3:e002521. [PMID: 23645914 PMCID: PMC3646183 DOI: 10.1136/bmjopen-2012-002521] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2013] [Revised: 04/09/2013] [Accepted: 04/09/2013] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVES This study aimed to investigate what percentage of National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme-funded projects have published their final reports in the programme's journal HTA and to explore reasons for non-publication. DESIGN Retrospective cohort study. SETTING Failure to publish findings from research is a significant area of research waste. It has previously been suggested that potentially over 50% of studies funded are never published. PARTICIPANTS All NIHR HTA projects with a planned submission date for their final report for publication in the journal series on or before 9 December 2011 were included. PRIMARY AND SECONDARY OUTCOME MEASURES The projects were classified according to the type of research, whether they had been published or not; if not yet published, whether they would be published in the future or not. The reasons for non-publication were investigated. RESULTS 628 projects were included: 582 (92.7%) had published a monograph; 19 (3%) were expected to publish a monograph; 13 (2.1%) were discontinued studies and would not publish; 12 (1.9%) submitted a report which did not lead to a publication as a monograph; and two (0.3%) did not submit a report. Overall, 95.7% of HTA studies either have published or will publish a monograph: 94% for those commissioned in 2002 or before and 98% for those commissioned after 2002. Of the 27 projects for which there will be no report, the majority (21) were commissioned in 2002 or before. Reasons why projects failed to complete included failure to recruit; issues concerning the organisation where the research was taking place; drug licensing issues; staffing issues; and access to data. CONCLUSIONS The percentage of HTA projects for which a monograph is published is high. The advantages of funding organisations requiring publication in their own journal include avoidance of publication bias and research waste.