1. Yang B, Jia Z. Diagnostic value of nocturnal trend changes in a dynamic electrocardiogram for coronary heart disease. BMC Cardiovasc Disord 2024; 24:561. PMID: 39407107; PMCID: PMC11481414; DOI: 10.1186/s12872-024-04213-2.

Abstract
OBJECTIVE To explore the diagnostic value of intermittent changes in the nocturnal ST segment trend graph in a dynamic electrocardiogram (ECG) for coronary heart disease (CHD). METHODS A total of 205 patients who underwent coronary angiography were included in this retrospective study. The sample size was determined through a power analysis targeting 80% power at a significance level of 0.05. Based on the degree of coronary artery diameter stenosis, participants were divided into a CHD group (n = 101) and a non-CHD group (n = 104). Morphological changes in the ST segment trend graph were observed and classified into two categories: 'wall-shaped' and 'peak-shaped' changes. RESULTS Among the 205 patients, 94 had nocturnal ST segment dynamic changes and 111 did not. The detection rate of CHD in patients without nocturnal ST segment dynamic changes was 21.59%, significantly lower than the 93.18% detected in patients with such changes (P < 0.05). The ST segment positive rate in patients with single-vessel disease (71.88%) was significantly lower than in patients with multi-vessel disease (78.57%) (P < 0.05). Among the 94 patients with intermittent nocturnal ST segment trend graph changes, the duration of these changes was longer in the CHD group than in the non-CHD group, but the difference was not significant (P > 0.05). The detection rate of CHD in the peak-shaped dynamic change group (76/82) was significantly higher than in the wall-shaped dynamic change group (6/82) (P < 0.05). CONCLUSION Peak-shaped changes in the nocturnal ST segment trend graph indicate coronary artery lesions. Nocturnal ST segment changes observed through dynamic ECG monitoring can serve as a valuable non-invasive predictor of CHD, providing a feasible method for early diagnosis and intervention in clinical practice.
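A power analysis of the kind described above (80% power at a two-sided significance level of 0.05, two groups of similar size, a binary endpoint) can be sketched with the standard normal-approximation formula for comparing two independent proportions. This is a generic illustration, not the authors' actual calculation, and the detection rates used are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Per-group sample size for a two-sided test of two independent
    proportions, using the pooled normal approximation."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)    # critical value for two-sided alpha
    z_b = z(power)            # quantile corresponding to desired power
    p_bar = (p1 + p2) / 2     # pooled proportion under the null
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical detection rates of 50% vs. 30%:
print(n_per_group(0.50, 0.30))              # 93 per group at 80% power
print(n_per_group(0.50, 0.30, power=0.90))  # 124 per group at 90% power
```

The required n is driven mostly by the assumed difference between the two proportions, which is why the plausibility of that assumption matters as much as the conventional choices of alpha and power.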
Affiliation(s)
- Bing Yang
- Department of Electrocardiogram Room, Shanxi Provincial People's Hospital, No. 29, Shuangtasi Street, Taiyuan, 030012, China
- Zhiyue Jia
- Department of Electrocardiogram Room, Shanxi Provincial People's Hospital, No. 29, Shuangtasi Street, Taiyuan, 030012, China
2. Wu C, Hao J, Xin Y, Song R, Li W, Zuo L, Zhang X, Cai Y, Wu H, Hui W. Poor sample size reporting quality and insufficient sample size in economic evaluations conducted alongside pragmatic trials: a cross-sectional survey. J Clin Epidemiol 2024; 176:111535. PMID: 39307404; DOI: 10.1016/j.jclinepi.2024.111535.

Abstract
OBJECTIVES Economic evaluations based on well-designed and well-conducted pragmatic randomized controlled trials (pRCTs) can provide valuable evidence on the cost-effectiveness of interventions, enhancing the relevance and applicability of findings to healthcare decision-making. However, economic evaluation outcomes are seldom taken into consideration when calculating sample size in pragmatic trials, and the reporting quality of sample size information in economic evaluations conducted alongside pRCTs remains unknown. This study aims to assess the reporting quality of sample size calculations and to estimate the power of economic evaluations in pRCTs. STUDY DESIGN AND SETTING We conducted a cross-sectional survey using pRCTs retrieved from PubMed and OVID from 1 January 2010 to 24 April 2022. Two groups of independent reviewers identified articles; three groups of reviewers each extracted the data. Descriptive statistics summarized the general characteristics of included studies, and statistical power analyses were performed on clinical and economic outcomes with sufficient data. RESULTS The electronic search identified 715 studies, of which 152 met the inclusion criteria and 26 were available for power analysis. Only 9 of the 152 trials (5.9%) considered economic outcomes when estimating sample size, and only one adjusted the sample size accordingly. Power values for trial-based economic evaluations and clinical trials ranged from 2.56% to 100% and from 3.21% to 100%, respectively. Regardless of perspective, in 14 of the 26 studies (53.8%) the power of the economic evaluation for quality-adjusted life years (QALYs) was lower than that of the clinical trial for its primary endpoint (PE). From the healthcare perspective this was the case in 11 of 24 studies (45.8%), and from the societal perspective in 8 of 13 studies (61.5%). Power values of economic evaluations for non-QALY outcomes from the healthcare and societal perspectives were potentially higher than those of the clinical trials in 3 of the 4 studies (75%). Power values for economic outcomes in Q1 journals were not higher than those in other journal impact factor quartiles. CONCLUSION Theoretically, pragmatic trials with concurrent economic evaluations can provide real-world evidence for healthcare decision makers. However, in pRCT-based economic evaluations, limited consideration and inadequate reporting of sample size calculations for economic outcomes could undermine the reliability and generalisability of results. We therefore recommend that future pragmatic trials with economic evaluations report in their protocols how sample sizes are determined or adjusted based on economic outcomes, to enhance transparency and evidence quality.
Affiliation(s)
- Changjin Wu
- School of Public Health, Chongqing Medical University, Chongqing, China
- Jun Hao
- Medical Research and Biometrics Centre, National Clinical Research Centre for Cardiovascular Diseases, Fuwai Hospital, National Centre for Cardiovascular Diseases, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China; Department of Clinical Sciences, Liverpool School of Tropical Medicine, Liverpool, UK; Institute for Global Health, University College London, London, UK
- Yu Xin
- Department of Science and Technology, West China Hospital, Sichuan University, Chengdu, China
- Ruomeng Song
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Wentan Li
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Ling Zuo
- Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University/West China School of Nursing, Sichuan University, Chengdu, China; Integrated Care Management Centre, Outpatient Department, West China Hospital, Sichuan University, Chengdu, China
- Xiyan Zhang
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Yuanyi Cai
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Huazhang Wu
- Department of Health Service Management, School of Health Management, China Medical University, Shenyang, China
- Wen Hui
- Department of Science and Technology, West China Hospital, Sichuan University, Chengdu, China
3. Gionfriddo MR, McClendon C, Nolfi DA, Kalarchian MA, Covvey JR. Back to the basics: guidance for designing good literature searches. Res Social Adm Pharm 2024; 20:463-468. PMID: 38272775; PMCID: PMC11149711; DOI: 10.1016/j.sapharm.2024.01.009.

Abstract
The number of scientific publications is growing at an unprecedented rate. Failure to properly evaluate existing literature at the start of a project may result in a researcher wasting time and resources. As pharmacy researchers and scholars look to conceptualize new studies, it is imperative to begin with a high-quality literature review that reveals what is known and unknown about a given topic. The purpose of this commentary is to provide useful guidance on conducting rigorous searches of the literature that inform the design and execution of research. Guidance for less formal literature reviews can be adapted from best practices utilized within the formalized field of evidence synthesis. Additionally, researchers can draw on guidance from PRESS (Peer Review of Electronic Search Strategies) to engage in self-evaluation of their search strategies. Finally, developing an awareness of common pitfalls when designing literature searches can provide researchers with confidence that their research is designed to fill clearly articulated gaps in knowledge.
Affiliation(s)
- David A Nolfi
- Duquesne University Gumberg Library, Pittsburgh, PA, USA
- Jordan R Covvey
- Duquesne University School of Pharmacy, Pittsburgh, PA, USA
4. Kounatidou NE, Tzavara C, Palioura S. Systematic review of sample size calculations and reporting in randomized controlled trials in ophthalmology over a 20-year period. Int Ophthalmol 2023; 43:2999-3010. PMID: 36917324; DOI: 10.1007/s10792-023-02687-1.

Abstract
PURPOSE Randomized controlled trials (RCTs) are considered the gold standard for the practice of evidence-based medicine. The purpose of this study is to systematically assess the reporting of sample size calculations in ophthalmology RCTs published in five leading journals over a 20-year period. Reviewing sample size calculations in ophthalmology RCTs sheds light on the methodological quality of these trials and, by extension, on the validity of their published results. METHODS The MEDLINE database was searched to identify full reports of RCTs in the journals Ophthalmology, JAMA Ophthalmology, American Journal of Ophthalmology, Investigative Ophthalmology and Visual Science, and British Journal of Ophthalmology published between January and December of the years 2000, 2010, and 2020. Screening identified 559 articles, of which 289 met the inclusion criteria for this systematic review. Data regarding sample size calculation reporting and trial characteristics were extracted for each trial by independent investigators. RESULTS In 2020, 77.9% of the RCTs reported sample size calculations, compared with 37% in 2000 (p < 0.001) and 60.7% in 2010 (p = 0.012). The proportion of studies reporting all parameters necessary for sample size recalculation increased significantly, from 17.2% in 2000 to 39.3% in 2010 and 43.0% in 2020 (p < 0.001). Reporting of funding was more frequent in 2020 (98.8%) than in 2010 (89.3%) and 2000 (53.1%). Registration in a clinical trials database occurred more often in 2020 (94.2%) than in 2000 (1.2%; p < 0.001) and 2010 (68%; p < 0.001). In 2020, 38.4% of studies reported a different sample size in the online registry from that in the published article. The most studied area in 2000 was glaucoma (29.6% of RCTs), whereas in 2010 and 2020 it was retina (40.2% and 37.2% of RCTs, respectively). The numbers of patients enrolled and of eyes studied were significantly greater in 2020 than in 2000 and 2010 (p < 0.001). CONCLUSION Sample size calculation reporting in ophthalmology RCTs improved significantly between 2000 and 2020 and is comparable to other fields of medicine. However, reporting of certain parameters remains inconsistent with current publication guidelines.
Affiliation(s)
- Chara Tzavara
- Department of Biostatistics, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Sotiria Palioura
- Department of Ophthalmology, University of Cyprus Medical School, Aglantzia, Cyprus
5. Marley-Zagar E, White IR, Royston P, Barthel FMS, Parmar MKB, Babiker AG. artbin: extended sample size for randomized trials with binary outcomes. The Stata Journal 2023; 23:24-52. PMID: 37461744; PMCID: PMC7614770; DOI: 10.1177/1536867x231161971.

Abstract
We describe the command artbin, which offers various new facilities for the calculation of sample size for binary outcome variables that are not otherwise available in Stata. While artbin has been available since 2004, it has not been previously described in the Stata Journal. artbin has been recently updated to include new options for different statistical tests, methods and study designs, improved syntax, and better handling of noninferiority trials. In this article, we describe the updated version of artbin and detail the various formulas used within artbin in different settings.
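As a rough illustration of the kind of closed-form calculation such a command performs (the exact formulas artbin uses are detailed in the article itself; this sketch is not its implementation), the normal-approximation sample size for a noninferiority trial with a binary outcome, assuming the same true event rate in both arms, can be written as follows. The event rate, margin, one-sided alpha, and power below are hypothetical:

```python
import math
from statistics import NormalDist

def ni_n_per_group(p: float, margin: float,
                   alpha: float = 0.025, power: float = 0.90) -> int:
    """Per-group sample size for a noninferiority comparison of two
    proportions, assuming both arms share the same true event rate p
    and testing at one-sided significance level alpha."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha)          # one-sided critical value
    z_b = z(power)
    variance = 2 * p * (1 - p)  # sum of the two binomial variances
    return math.ceil((z_a + z_b) ** 2 * variance / margin ** 2)

# Hypothetical: 85% success in both arms, 10-percentage-point margin.
print(ni_n_per_group(0.85, 0.10))  # 268 per group
```

Note that a noninferiority design uses a one-sided alpha and powers the trial against the margin rather than against a between-arm difference, which is one reason dedicated commands handle it separately from superiority designs.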
Affiliation(s)
- Ian R. White
- MRC Clinical Trials Unit, University College London, London, UK
- Patrick Royston
- MRC Clinical Trials Unit, University College London, London, UK
6. Xavier-Santos D, Scharlack NK, Pena FDL, Antunes AEC. Effects of Lacticaseibacillus rhamnosus GG supplementation, via food and non-food matrices, on children's health promotion: a scoping review. Food Res Int 2022; 158:111518. DOI: 10.1016/j.foodres.2022.111518.
7. Sample size justifications in Gait & Posture. Gait Posture 2022; 92:333-337. PMID: 34920357; DOI: 10.1016/j.gaitpost.2021.12.010.

Abstract
BACKGROUND Context regarding how researchers determine the sample size of their experiments is important for interpreting the results and determining their value and meaning. Between 2018 and 2019, the journal Gait & Posture introduced a requirement for sample size justification in its author guidelines. RESEARCH QUESTION How frequently, and in what ways, are sample sizes justified in Gait & Posture research articles, and was the introduction of a guideline requiring sample size justification associated with a change in practice? METHODS The guideline was not in place prior to May 2018 and was in place from 25 July 2019. All articles in the three most recent volumes of the journal (84-86) and the three most recent pre-guideline volumes (60-62) at the time of preregistration were included in this analysis, providing an initial sample of 324 articles (176 pre-guideline and 148 post-guideline). Articles were screened by two authors to extract author data, article metadata, and sample size justification data. Specifically, screeners identified whether (yes or no) and how sample sizes were justified, using six potential justification types (Measure Entire Population, Resource Constraints, Accuracy, A Priori Power Analysis, Heuristics, No Justification) plus an additional option of Other/Unsure/Unclear. RESULTS In most cases, authors of Gait & Posture articles did not provide a justification for their study's sample size. The introduction of the guideline was associated with a modest increase in the percentage of articles providing a justification (from 16.6% to 28.1%). A priori power calculations were the dominant type of justification, but many were not reported in enough detail to allow replication. SIGNIFICANCE Gait & Posture researchers should be more transparent in how they determine their sample sizes and should carefully consider whether those sample sizes are suitable. Editors and journals may consider adding a similar guideline as a low-resource way to improve sample size justification reporting.
8. Gosling CJ, Cartigny A, Mellier BC, Solanes A, Radua J, Delorme R. Efficacy of psychosocial interventions for Autism spectrum disorder: an umbrella review. Mol Psychiatry 2022; 27:3647-3656. PMID: 35790873; PMCID: PMC9708596; DOI: 10.1038/s41380-022-01670-z.

Abstract
INTRODUCTION The wide range of psychosocial interventions designed to assist people with Autism Spectrum Disorder (ASD) makes it challenging to compile and hierarchize the scientific evidence supporting their efficacy. We therefore performed an umbrella review of published meta-analyses of controlled clinical trials that investigated the efficacy of psychosocial interventions on both core and related ASD symptoms. METHODS Each identified meta-analysis was re-estimated using a random-effects model with a restricted maximum likelihood estimator. The methodological quality of included meta-analyses was critically appraised, and the credibility of the evidence was assessed algorithmically according to criteria adapted for the purpose of this study. RESULTS We identified a total of 128 meta-analyses derived from 44 reports. More than half of the non-overlapping meta-analyses were nominally statistically significant and/or displayed a moderate-to-large pooled effect size favoring the psychosocial interventions. The assessment of the credibility of evidence indicated that the efficacy of early intensive behavioral interventions, developmental interventions, naturalistic developmental behavioral interventions, and parent-mediated interventions was supported by suggestive evidence on at least one outcome in preschool children; these outcomes included social communication deficits, global cognitive abilities, and adaptive behaviors. Results also revealed highly suggestive evidence that parent-mediated interventions improved disruptive behaviors in early school-aged children. The efficacy of social skills groups was supported by suggestive evidence for improving social communication deficits and overall ASD symptoms in school-aged children and adolescents. Only four meta-analyses had a statistically significant pooled effect size in a sensitivity analysis restricted to randomized controlled trials at low risk of detection bias. DISCUSSION This umbrella review confirmed that several psychosocial interventions show promise for improving symptoms related to ASD at different stages of life. However, additional well-designed randomized controlled trials are still required to produce a clearer picture of the efficacy of these interventions. To facilitate the dissemination of scientific knowledge about psychosocial interventions for individuals with ASD, we built an open-access, interactive website that shares the information collected and the results generated during this umbrella review. PRE-REGISTRATION PROSPERO ID CRD42020212630.
Affiliation(s)
- Corentin J. Gosling
- Paris Nanterre University, DysCo Laboratory, F-92000 Nanterre, France; Université de Paris, Laboratoire de Psychopathologie et Processus de Santé, F-92100 Boulogne-Billancourt, France; Centre for Innovation in Mental Health (CIMH), School of Psychology, Faculty of Environmental and Life Sciences, University of Southampton, Southampton, UK
- Ariane Cartigny
- Université de Paris, Laboratoire de Psychopathologie et Processus de Santé, F-92100 Boulogne-Billancourt, France; Department of Child and Adolescent Psychiatry, Robert Debré Hospital, APHP, Paris, France
- Aleix Solanes
- Imaging of Mood- and Anxiety-Related Disorders (IMARD) Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), CIBERSAM, Barcelona, Spain
- Joaquim Radua
- Imaging of Mood- and Anxiety-Related Disorders (IMARD) Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), CIBERSAM, Barcelona, Spain; Department of Psychosis Studies, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK; Department of Clinical Neuroscience, Centre for Psychiatric Research and Education, Karolinska Institutet, Stockholm, Sweden
- Richard Delorme
- Department of Child and Adolescent Psychiatry, Robert Debré Hospital, APHP, Paris, France; Human Genetics and Cognitive Functions, Institut Pasteur, Paris, France
9. Hara A, Yoshioka T. Importance of designing sample size, subgroup analysis, and covariate presentation: towards a better randomized controlled trial. Lung Cancer 2021; 161:200. PMID: 34147256; DOI: 10.1016/j.lungcan.2021.06.007.

Affiliation(s)
- Akio Hara
- Department of Surgery, Suita Municipal Hospital, 5-7 Kishibeshin-machi, Suita City, Osaka, 564-8465, Japan
- Takashi Yoshioka
- Center for Innovative Research for Communities and Clinical Excellence (CiRC(2)LE), Fukushima Medical University, Fukushima, Japan
10. Tzanetakis GN, Koletsi D. A priori power considerations in Endodontic Research. Do we miss the timeline? Int Endod J 2021; 54:1516-1526. PMID: 33872405; DOI: 10.1111/iej.13531.

Abstract
AIM To record the prevalence of a priori power calculations in manuscripts published in three endodontic journals between 2018 and 2020, and to detect associations with a number of study characteristics, including journal, publication year, study design, geographic region, number of centres and authors, whether the primary outcome pertained to a statistically significant effect, and whether confidence intervals (CIs) were reported. METHODOLOGY The contents of the three leading endodontic journals with the highest impact factors (International Endodontic Journal, IEJ; Journal of Endodontics, JOE; and Australian Endodontic Journal, AEJ) were assessed from January 2018 to December 2020. The proportion of articles reporting a priori power calculations was recorded, and the associations described above were assessed. Univariable and multivariable logistic regression were used to identify significant predictors, whilst interaction and linear trend effects were also considered. RESULTS A total of 716 original research articles were included. The majority were published in the JOE (417/716; 58.2%), followed by the IEJ (225/716; 31.4%) and the AEJ (74/716; 10.4%). Overall, a priori power considerations were reported in 243 of 716 articles (33.9%). The IEJ presented 1.61 times higher odds of including a priori power considerations compared with the JOE (adjusted odds ratio, OR = 1.61; 95% CI: 1.11, 2.34), whilst for the AEJ the corresponding odds were 41% lower than for the JOE (adjusted OR = 0.59; 95% CI: 0.31, 1.14). For each additional (more recent) publication year, the odds of appropriate reporting of power considerations increased by 64% (adjusted OR = 1.64; 95% CI: 1.32, 2.04). There was strong evidence that interventional research was associated with 10.54 times higher odds of a priori considerations compared with observational study designs (adjusted OR = 10.54; 95% CI: 5.50, 20.19). CONCLUSIONS The high prevalence of missing a priori power considerations indicates suboptimal reporting in the three endodontic journals analysed. Although reporting improved over time, efforts to incorporate a correct determination of the required sample size at the design stage of any future study should be endorsed by journal editors, authors, and the scientific community.
Affiliation(s)
- G N Tzanetakis
- Department of Endodontics, School of Dentistry, National and Kapodistrian University of Athens, Athens, Greece
- D Koletsi
- Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
11. Raittio L, Launonen A, Mattila VM, Reito A. Estimates of the mean difference in orthopaedic randomized trials: obligatory yet obscure. BMC Med Res Methodol 2021; 21:59. PMID: 33761900; PMCID: PMC7992936; DOI: 10.1186/s12874-021-01249-2.

Abstract
Background Randomized controlled trials in orthopaedics are powered mainly to detect large effect sizes. A possible discrepancy between the estimated and the true mean difference is a challenge for statistical inference based on p-values. We explored the justifications given for the mean difference estimates used in power calculations, and also assessed the distribution of observations in the primary outcome and the possibility of ceiling effects. Methods We conducted a systematic review of randomized controlled trials with power calculations published in eight clinical orthopaedic journals between 2016 and 2019. Trials with one continuous primary outcome and 1:1 allocation were eligible. Rationales and references for the mean difference estimate were recorded from the Methods sections. The possibility of a ceiling effect was addressed by assessing the weighted mean and standard deviation of the primary outcome and its elaboration in the Discussion section of each RCT where available. Results 264 trials were included in this study. Of these, 108 (41%) provided some rationale or reference for the mean difference estimate. The most common rationales or references were a minimal clinically important difference (16%), observational studies on the same subject (8%), and the authors' own judgment of clinical relevance (6%). In a third of the trials, the weighted mean plus one standard deviation of the primary outcome exceeded the best possible value on the patient-reported outcome measure scale, indicating a possible ceiling effect in the outcome. Conclusions The mean difference estimates used in power calculations are rarely properly justified in orthopaedic trials. In general, trials with a patient-reported outcome measure as the primary outcome do not assess or report the possibility of a ceiling effect in the primary outcome or elaborate on it in the Discussion section.
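The sensitivity of a power calculation to the assumed mean difference, which the review above finds is rarely justified, is easy to demonstrate with the standard normal-approximation formula for a two-arm trial with a continuous outcome. The outcome scale, standard deviation, and candidate differences in this sketch are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Per-group sample size for detecting mean difference `delta` between
    two groups with common standard deviation `sd` (normal approximation;
    an exact t-based calculation would add a few participants)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Hypothetical patient-reported outcome scale with SD = 20 points:
print(n_per_group(10, 20))  # 63 per group for a 10-point difference
print(n_per_group(5, 20))   # 252 per group for a 5-point difference
```

Halving the assumed difference roughly quadruples the required sample size, which is why an optimistic mean difference estimate can leave a trial badly underpowered for the true effect.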
Affiliation(s)
- Lauri Raittio
- The Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, 33520, Tampere, Finland
- Antti Launonen
- Department of Orthopaedics and Traumatology, Tampere University Hospital, Teiskontie 35, 33520, Tampere, Finland
- Ville M Mattila
- The Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, 33520, Tampere, Finland; Department of Orthopaedics and Traumatology, Tampere University Hospital, Teiskontie 35, 33520, Tampere, Finland
- Aleksi Reito
- Department of Orthopaedics and Traumatology, Tampere University Hospital, Teiskontie 35, 33520, Tampere, Finland
12. Sankaran SP, Sonis S. Network meta-analysis from a pairwise meta-analysis design: to assess the comparative effectiveness of oral care interventions in preventing ventilator-associated pneumonia in critically ill patients. Clin Oral Investig 2021; 25:2439-2447. PMID: 33537946; DOI: 10.1007/s00784-021-03802-1.

Abstract
OBJECTIVE In this study, we assessed the usefulness of network meta-analysis (NMA) in creating a hierarchy to define the most effective oral care intervention for the prevention and management of ventilator-associated pneumonia (VAP). MATERIALS AND METHODS We applied NMA to a previously published robust pairwise meta-analysis. Statistical analyses were based on comparing rates of total VAP events between intervention groups and placebo/usual care groups. We synthesized a network graph, reported the ranking order of the interventions, and summarized the output in a forest plot with placebo/usual care as the reference treatment. RESULTS With our inclusion and exclusion criteria, we extracted 25 studies (4473 subjects) for the NMA, which included 16 treatments, 29 pairwise comparisons, and 15 designs. Based on frequentist ranking P-scores, tooth brushing (P fixed = 0.94, P random = 0.89), tooth brushing with povidone-iodine (P fixed = 0.90, P random = 0.88), and furacillin (P fixed = 0.88, P random = 0.84) were the three best interventions for preventing VAP. Because the included studies ranged from low to high risk of bias, these findings should not be used to guide clinical treatment directly. CONCLUSIONS Conclusions drawn from this NMA should be interpreted with caution, and we recommend that future clinical trials build on these results. CLINICAL RELEVANCE NMA appeared to be an effective platform from which multiple interventions reported in disparate clinical trials could be compared to derive a hierarchical assessment of efficacy in VAP intervention.
Affiliation(s)
- Satheeshkumar P Sankaran
- Harvard Medical School, Boston, 02115, MA, USA; Department of Oral Oncology, Roswell Park Comprehensive Cancer Center, Buffalo, 14263, NY, USA
- Stephen Sonis
- Brigham and Women's Hospital and the Harvard School of Dental Medicine, Boston, 02115, MA, USA
13
|
Sidebotham D. Are most randomised trials in anaesthesia and critical care wrong? An analysis using Bayes’ theorem. Anaesthesia 2020; 75:1386-1393. [DOI: 10.1111/anae.15029] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/19/2020] [Indexed: 01/20/2023]
Affiliation(s)
- D. Sidebotham
- Department of Anaesthesia and the Cardiothoracic and Vascular Intensive Care Unit, Auckland City Hospital, New Zealand
14
[Influence of impact factor on reporting sample size calculations in publications on studies exemplified by AMD treatment : Cross-sectional investigation on the presence of sample size calculations in publications of RCTs on AMD treatment in journals with low and high impact factors]. Ophthalmologe 2020; 117:125-131. [PMID: 31201561 DOI: 10.1007/s00347-019-0924-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
BACKGROUND For scientific and ethical reasons, randomized controlled clinical trials (RCTs) should be based on a sample size calculation. The CONSORT statement, an established publication guideline for transparent study reporting, requires a sample size calculation in every study publication. OBJECTIVE The availability of sample size calculations in RCT publications on treatment of age-related macular degeneration (AMD) was investigated. The primary hypothesis of this investigation compared the prevalence of reported sample size calculations between journals with higher (≥5) and lower (<5) impact factors (IF). MATERIAL AND METHODS It was examined whether information on sample size calculation was available in a series of 97 publications of RCTs on AMD treatment published between 2004 and 2014. RESULTS Only 46 out of 97 (47%) study publications provided information on the reason for the number of patients enrolled. The comparison of publications from journals with an IF ≥ 5 (63% of 30 publications) and from journals with an IF < 5 (40% of 67 publications) showed a statistically significant difference of 23% in the frequencies of available sample size calculations (95% confidence interval, CI: 2% to 44%). Of the publications published before 2010, 43% reported a sample size calculation, versus 51% of those published afterwards. CONCLUSION Publications in journals with a higher IF more frequently reported a sample size calculation. More than 50% of the publications did not report any sample size calculation. Authors and reviewers of publications should pay more attention to the explicit reporting of sample size calculations.
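The reported 23% difference and its confidence interval can be reproduced with a standard Wald interval for the difference of two independent proportions. A sketch (it assumes the simple normal approximation, which matches the reported figures):

```python
from math import sqrt

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions."""
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# 63% of 30 high-IF publications vs 40% of 67 low-IF publications
d, lo, hi = diff_ci(0.63, 30, 0.40, 67)
print(f"{d:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # → 23% (95% CI 2% to 44%)
```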
15
Rauch G, Hafermann L, Mansmann U, Pigeot I. Comprehensive survey among statistical members of medical ethics committees in Germany on their personal impression of completeness and correctness of biostatistical aspects of submitted study protocols. BMJ Open 2020; 10:e032864. [PMID: 32024788 PMCID: PMC7044913 DOI: 10.1136/bmjopen-2019-032864] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
OBJECTIVES To assess the biostatistical quality of study protocols submitted to German medical ethics committees, according to the personal appraisal of their statistical members. DESIGN We conducted a web-based survey among biostatisticians who had been active as members of German medical ethics committees during the past 3 years. SETTING The study population was identified by a comprehensive web search of the websites of German medical ethics committees. PARTICIPANTS The final list comprised 86 eligible persons. In total, 57 (66%) completed the survey. QUESTIONNAIRE The first item checked whether the inclusion criterion was met; the last item assessed satisfaction with the survey. Four items characterised the medical ethics committee in terms of type and location, and one item asked about the urgency of biostatistical training for medical investigators. The main 2×12 items elicited an individual assessment of the quality of biostatistical aspects in the submitted study protocols, distinguishing studies regulated by the German Medicines Act (AMG) or the German Act on Medical Devices (MPG) from studies not regulated by these laws. PRIMARY AND SECONDARY OUTCOME MEASURES The individual assessment of the quality of biostatistical aspects corresponds to the primary objective. Participants were asked to complete the sentence 'In x% of the submitted study protocols, the following problem occurs', where 12 different statistical problems were formulated. All other items assess secondary endpoints. RESULTS Across all biostatistical aspects, 45 of 49 (91.8%) participants judged the quality of AMG/MPG study protocols to be much better than that of 'non-regulated' studies. The latter are, in the median, affected 20%-60% more often by statistical problems. The highest need for training was reported for sample size calculation, missing values and multiple comparison procedures. CONCLUSIONS Biostatisticians active in German medical ethics committees classify the biostatistical quality of study protocols as low for 'non-regulated' studies, whereas quality is much better for AMG/MPG studies.
Affiliation(s)
- Geraldine Rauch
- Institute of Biometry and Clinical Epidemiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Berlin Institute of Health, Berlin, Germany
- Lorena Hafermann
- Institute of Biometry and Clinical Epidemiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Berlin Institute of Health, Berlin, Germany
- Ulrich Mansmann
- Institute for Medical Information Processing, Biometry, and Epidemiology, Ludwig-Maximilians-Universität Munich, Munich, Germany
- Iris Pigeot
- Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany
- University of Bremen, Institute of Statistics, Bremen, Germany
16
Tulka S, Geis B, Baulig C, Knippschild S, Krummenauer F. Validity of sample sizes in publications of randomised controlled trials on the treatment of age-related macular degeneration: cross-sectional evaluation. BMJ Open 2019; 9:e030312. [PMID: 31601589 PMCID: PMC6797239 DOI: 10.1136/bmjopen-2019-030312] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
OBJECTIVE The aim of this cross-sectional study was to examine the completeness and accuracy of the reporting of sample size calculations in randomised controlled trial (RCT) publications on the treatment of age-related macular degeneration (AMD). METHODS A sample of 97 RCTs published between 2004 and 2014 was reviewed for the calculation of their sample size. It was examined whether a (complete) description of the sample size calculation was presented. Furthermore, the sample size was recalculated, whenever possible based on the published details, in order to verify the reported number of patients. PRIMARY OUTCOME MEASURE The primary endpoint of this cross-sectional investigation was a described sample size calculation that was reproducible, complete and correct (maximum tolerated deviation between reported and replicated sample size ±2 participants per trial arm). RESULTS A total of 50 publications (52%) did not provide any information on the justification of the number of patients included. Only 17 publications (18%) provided all the necessary parameters for recalculation; 8 of 97 (8%, 95% CI: 4% to 16%) publications achieved the primary endpoint. The median relative deviation between reported and recalculated sample sizes was 1%, with a range from -43% to +66%. CONCLUSION Although a transparent sample size legitimation is a crucial determinant of an RCT's methodological validity, more than half of the RCT publications considered failed to report one. Furthermore, reported sample size legitimations were often incomplete or incorrect. In summary, clinical authors should pay more attention to the transparent reporting of sample size calculations, and clinical journal reviewers may opt to reproduce reported sample size calculations. SYNOPSIS More than half of the analysed RCT publications on the treatment of AMD did not report a transparent sample size calculation. Only 8% reported a complete and correct sample size calculation.
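For context, the kind of calculation these publications should report is, in the simplest two-proportion case, a closed-form normal-approximation formula, which makes the ±2-participant reproducibility check above straightforward. A sketch (the responder rates below are illustrative only; actual trials may use continuity corrections or exact methods, which shift the result slightly):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for detecting a difference between
    two proportions with a two-sided test (normal approximation)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# hypothetical responder rates of 30% vs. 45%
n = n_per_arm(0.30, 0.45)  # roughly 160 per arm
```

Recalculating from the four reported inputs (α, power, and the two assumed rates) and comparing with the published number is exactly the reproducibility test applied in this study.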
Affiliation(s)
- Sabrina Tulka
- Institute for Medical Biometry and Epidemiology, University Witten Herdecke Faculty of Health, Witten, Germany
- Berit Geis
- Institute for Medical Biometry and Epidemiology, University Witten Herdecke Faculty of Health, Witten, Germany
- Christine Baulig
- Institute for Medical Biometry and Epidemiology, University Witten Herdecke Faculty of Health, Witten, Germany
- Stephanie Knippschild
- Institute for Medical Biometry and Epidemiology, University Witten Herdecke Faculty of Health, Witten, Germany
- Frank Krummenauer
- Institute for Medical Biometry and Epidemiology, University Witten Herdecke Faculty of Health, Witten, Germany
17
Nikolakopoulou A, Trelle S, Sutton AJ, Egger M, Salanti G. Synthesizing existing evidence to design future trials: survey of methodologists from European institutions. Trials 2019; 20:334. [PMID: 31174597 PMCID: PMC6555919 DOI: 10.1186/s13063-019-3449-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Accepted: 05/13/2019] [Indexed: 12/26/2022] Open
Abstract
Background ‘Conditional trial design’ is a framework for efficiently planning new clinical trials based on a network of relevant existing trials. The framework considers whether new trials are required and how the existing evidence can be used to answer the research question and plan future research. The potential of this approach has not been fully realized. Methods We conducted an online survey among trial statisticians, methodologists, and users of evidence synthesis research using referral sampling to capture opinions about the conditional trial design framework and current practices among clinical researchers. The questions included in the survey were related to the decision of whether a meta-analysis answers the research question, the optimal way to synthesize available evidence, which relates to the acceptability of network meta-analysis, and the use of evidence synthesis in the planning of new studies. Results In total, 76 researchers completed the survey. Two out of three survey participants (65%) were willing to possibly or definitely consider using evidence synthesis to design a future clinical trial and around half of the participants would give priority to such a trial design. The median rating of the frequency of using such a trial design was 0.41 on a scale from 0 (never) to 1 (always). Major barriers to adopting conditional trial design include the current regulatory paradigm and the policies of funding agencies and sponsors. Conclusions Participants reported moderate interest in using evidence synthesis methods in the design of future trials. They indicated that a major paradigm shift is required before the use of network meta-analysis is regularly employed in the design of trials. Electronic supplementary material The online version of this article (10.1186/s13063-019-3449-6) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Adriani Nikolakopoulou
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland.
- Sven Trelle
- CTU Bern, University of Bern, Bern, Switzerland
- Alex J Sutton
- Department of Health Sciences, College of Medicine, Biological Sciences and Psychology, University of Leicester, Leicester, UK
- Matthias Egger
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
18
Is Pelvic Plexus Block Superior to Periprostatic Nerve Block for Pain Control during Transrectal Ultrasonography-Guided Prostate Biopsy? A Double-Blind, Randomized Controlled Trial. J Clin Med 2019; 8:jcm8040557. [PMID: 31022977 PMCID: PMC6517998 DOI: 10.3390/jcm8040557] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Accepted: 04/23/2019] [Indexed: 11/16/2022] Open
Abstract
We evaluated whether pelvic plexus block (PPB) is superior to periprostatic nerve block (PNB) for pain control during transrectal ultrasonography (TRUS)-guided prostate biopsy (PBx). A prospective, double-blind, randomized, controlled study was performed at a single center; 46 patients were enrolled and randomly allocated into two groups: PPB (n = 23) and PNB (n = 23). The visual analogue scale (VAS) was used; pain scores were measured four times: during local anesthesia, probe insertion, sampling procedures, and at 15 min post procedure. No significant differences were observed in VAS scores during local anesthesia (2.30 for PPB vs. 2.65 for PNB, p = 0.537) or during probe insertion (2.83 for PPB vs. 2.39 for PNB, p = 0.569). Similarly, no differences in VAS scores were detected during the sampling procedures (2.83 for PPB vs. 2.87 for PNB, p = 0.867) or at 15 min post procedure (1.39 for PPB vs. 1.26 for PNB, p = 0.631). No major complications were noted in either group. Both PPB and PNB are comparably effective and safe methods for PBx-related pain relief, and PPB is not superior to PNB. The local anesthetic method can therefore be selected based on the preference and skill of the operator.
19
Copsey B, Thompson JY, Vadher K, Ali U, Dutton SJ, Fitzpatrick R, Lamb SE, Cook JA. Sample size calculations are poorly conducted and reported in many randomized trials of hip and knee osteoarthritis: results of a systematic review. J Clin Epidemiol 2018; 104:52-61. [DOI: 10.1016/j.jclinepi.2018.08.013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Revised: 07/20/2018] [Accepted: 08/17/2018] [Indexed: 12/22/2022]
20
Clark T, Wicentowski RH, Sydes MR. Cross-sectional analysis of UK research studies in 2015: results from a scoping project with the UK Health Research Authority. BMJ Open 2018; 8:e022340. [PMID: 30337312 PMCID: PMC6196875 DOI: 10.1136/bmjopen-2018-022340] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 05/22/2018] [Accepted: 08/23/2018] [Indexed: 11/04/2022] Open
Abstract
OBJECTIVES To determine whether data on research studies held by the UK Health Research Authority (HRA) could be summarised automatically with minimal manual intervention. There are numerous initiatives to reduce research waste by improving the design, conduct, analysis and reporting of clinical studies. However, quantitative data on the characteristics of clinical studies and the impact of the various initiatives are limited. DESIGN Feasibility study, using 1 year of data. SETTING We worked with the HRA on a pilot study using research applications submitted for UK-wide ethical review. We extracted into a single dataset, information held in anonymised XML files by the Integrated Research Application System (IRAS) and the HRA Assessment Review Portal (HARP). Research applications from 2014 to 2016 were provided. We used standard text extraction methods to assess information held in free-text fields. We used simple, descriptive methods to summarise the research activities that we extracted. PARTICIPANTS Not applicable (records-based study). INTERVENTIONS Not applicable. PRIMARY AND SECONDARY OUTCOME MEASURES Feasibility of extraction and processing. RESULTS We successfully imported 1775 non-duplicate research applications from the XML files into a single database. Of these, 963 were randomised controlled trials and 812 were other studies. Most studies received a favourable opinion. There was limited patient and public involvement in the studies. Most, but not all, studies were planned for publication of results. Novel study designs (eg, adaptive and Bayesian designs) were infrequently reported. CONCLUSIONS We have demonstrated that the data submitted from IRAS to the HRA and its HARP system are accessible and can be queried for information. We strongly encourage the development of fully resourced collaborative projects to further this work. This would aid understanding of how study characteristics change over time and across therapeutic areas, as well as the progress of initiatives to improve the quality and relevance of research studies.
Affiliation(s)
- Tim Clark
- Faculty of Medicine, Institut für Medizinische Informationsverarbeitung, Biometrie und Epidemiologie (IBE), Ludwig-Maximilians University, Munich, Germany
- Matthew R Sydes
- MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, University College London, London, UK
21
Jones HE, Ades AE, Sutton AJ, Welton NJ. Use of a random effects meta-analysis in the design and analysis of a new clinical trial. Stat Med 2018; 37:4665-4679. [PMID: 30187505 PMCID: PMC6484819 DOI: 10.1002/sim.7948] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2016] [Revised: 06/29/2018] [Accepted: 07/28/2018] [Indexed: 01/08/2023]
Abstract
In designing a randomized controlled trial, it has been argued that trialists should consider existing evidence about the likely intervention effect. One approach is to form a prior distribution for the intervention effect based on a meta‐analysis of previous studies and then power the trial on its ability to affect the posterior distribution in a Bayesian analysis. Alternatively, methods have been proposed to calculate the power of the trial to influence the “pooled” estimate in an updated meta‐analysis. These two approaches can give very different results if the existing evidence is heterogeneous, summarised using a random effects meta‐analysis. We argue that the random effects mean will rarely represent the trialist's target parameter, and so, it will rarely be appropriate to power a trial based on its impact upon the random effects mean. Furthermore, the random effects mean will not generally provide an appropriate prior distribution. More appropriate alternatives include the predictive distribution and shrinkage estimate for the most similar study. Consideration of the impact of the trial on the entire random effects distribution might sometimes be appropriate. We describe how beliefs about likely sources of heterogeneity have implications for how the previous evidence should be used and can have a profound impact on the expected power of the new trial. We conclude that the likely causes of heterogeneity among existing studies need careful consideration. In the absence of explanations for heterogeneity, we suggest using the predictive distribution from the meta‐analysis as the basis for a prior distribution for the intervention effect.
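The distinction the authors draw between the random-effects mean and the predictive distribution can be made concrete: a predictive interval for a new trial adds the between-study variance τ² to the uncertainty of the pooled mean. A sketch using the DerSimonian-Laird estimator and a normal approximation (the effect values below are invented for illustration; a t-based interval, as often recommended, would be slightly wider):

```python
from math import sqrt
from statistics import NormalDist

def random_effects(effects, ses):
    """DerSimonian-Laird random-effects pooling with an approximate
    95% predictive interval for the effect in a new study."""
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    mu = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se_mu = sqrt(1 / sum(w_star))
    z = NormalDist().inv_cdf(0.975)
    half = z * sqrt(tau2 + se_mu ** 2)              # predictive, not just mean, uncertainty
    return mu, tau2, (mu - half, mu + half)

# invented log odds ratios from five heterogeneous trials
mu, tau2, pred = random_effects([-0.5, -0.1, -0.8, 0.2, -0.4],
                                [0.20, 0.25, 0.30, 0.20, 0.25])
```

With heterogeneous inputs like these, the predictive interval is much wider than the confidence interval for the mean, which is exactly why the authors argue the mean alone is a poor basis for a prior.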
Affiliation(s)
- Hayley E Jones
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- A E Ades
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Alex J Sutton
- Department of Health Sciences, University of Leicester, Leicester, UK
- Nicky J Welton
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
22
Flege MM, Thomsen SF. Sample size estimation practices in research protocols submitted to Danish scientific ethics committees. Contemp Clin Trials Commun 2018; 11:165-169. [PMID: 30140776 PMCID: PMC6104346 DOI: 10.1016/j.conctc.2018.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Revised: 08/01/2018] [Accepted: 08/13/2018] [Indexed: 11/17/2022] Open
Abstract
Background Sample size in research projects is estimated before initiation of the study to minimise type 1 and type 2 error, while keeping the study's financial cost and subject enrolment to a minimum. This study investigates project-specific factors potentially associated with correct estimation of sample size in study protocols. Methods Examination of 189 non-commercially sponsored study protocols (84 randomised controlled trials (RCTs) and 105 non-RCT studies) submitted to the Scientific Ethics Committees of the Capital Region of Denmark from 2013 to 2015. Results 119 (63%) study protocols contained a sample size calculation, with a significantly higher rate of sample size calculations in RCT than in non-RCT study protocols (76% vs. 52%, p < 0.001). Sample size calculations were also significantly more common in intervention studies than in non-intervention studies (69% vs. 52%, p = 0.020), in studies including blood samples compared with those without (69% vs. 55%, p = 0.045), and in studies funded by a foundation donation compared with unfunded studies (68% vs. 49%, p = 0.040). Further, studies enrolling a larger number of sick patients (p = 0.048) and newer studies (p = 0.032) were more likely to include a sample size calculation in the protocol. Conclusions Estimation of sample size is more often reported in RCT than in non-RCT study protocols. Also, intervention studies, studies funded by a foundation donation, studies including blood samples, studies with a greater number of sick participants and chronologically newer study protocols more often reported a sample size calculation.
Affiliation(s)
- Simon Francis Thomsen
- Department of Dermatology, Bispebjerg Hospital, Copenhagen, Denmark
- Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Corresponding author. Department of Dermatology, Bispebjerg Hospital, Bispebjerg Bakke 23, DK-2400, Copenhagen, NV, Denmark.
23
Salanti G, Nikolakopoulou A, Sutton AJ, Reichenbach S, Trelle S, Naci H, Egger M. Planning a future randomized clinical trial based on a network of relevant past trials. Trials 2018; 19:365. [PMID: 29996869 PMCID: PMC6042258 DOI: 10.1186/s13063-018-2740-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2017] [Accepted: 06/12/2018] [Indexed: 11/16/2022] Open
Abstract
Background The important role of network meta-analysis of randomized clinical trials in health technology assessment and guideline development is increasingly recognized. This approach has the potential to obtain conclusive results earlier than with new standalone trials or conventional, pairwise meta-analyses. Methods Network meta-analyses can also be used to plan future trials. We introduce a four-step framework that aims to identify the optimal design for a new trial that will update the existing evidence while minimizing the required sample size. The new trial designed within this framework does not need to include all competing interventions and comparisons of interest and can contribute direct and indirect evidence to the updated network meta-analysis. We present the method by virtually planning a new trial to compare biologics in rheumatoid arthritis and a new trial to compare two drugs for relapsing-remitting multiple sclerosis. Results A trial design based on updating the evidence from a network meta-analysis of relevant previous trials may require a considerably smaller sample size to reach the same conclusion compared with a trial designed and analyzed in isolation. Challenges of the approach include the complexity of the methodology and the need for a coherent network meta-analysis of previous trials with little heterogeneity. Conclusions When used judiciously, conditional trial design could significantly reduce the required resources for a new study and prevent experimentation with an unnecessarily large number of participants. Electronic supplementary material The online version of this article (10.1186/s13063-018-2740-2) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland.
- Adriani Nikolakopoulou
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Alex J Sutton
- Department of Health Sciences, College of Medicine, Biological Sciences and Psychology, University of Leicester, Leicester, UK
- Stephan Reichenbach
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Department of Rheumatology, Immunology and Allergiology, University Hospital, University of Bern, Bern, Switzerland
- Sven Trelle
- CTU Bern, University of Bern, Bern, Switzerland
- Huseyin Naci
- LSE Health, Department of Health Policy, London School of Economics and Political Science, London, UK
- Matthias Egger
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
24
Nikolakopoulou A, Mavridis D, Furukawa TA, Cipriani A, Tricco AC, Straus SE, Siontis GCM, Egger M, Salanti G. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study. BMJ 2018; 360:k585. [PMID: 29490922 PMCID: PMC5829520 DOI: 10.1136/bmj.k585] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/22/2018] [Indexed: 12/01/2022]
Abstract
OBJECTIVE To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. DESIGN Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. DATA SOURCES Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. ELIGIBILITY CRITERIA FOR STUDY SELECTION Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded when direct and indirect evidence disagreed, based on a side-splitting (node-splitting) test (P<0.10). OUTCOMES AND ANALYSIS Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed, and the evidence against the null hypothesis was considered strong when the monitoring boundaries were crossed. The significance level was set at α=5%, power at 90% (β=10%), and the anticipated treatment effect to detect equal to the final estimate from the network meta-analysis. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. RESULTS 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing the treatments of interest continued to be published for eight comparisons after strong evidence had become evident in network meta-analysis. CONCLUSIONS In comparative effectiveness research, prospectively planned living network meta-analyses produced strong evidence against the null hypothesis more often and earlier than conventional, pairwise meta-analyses.
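The "monitoring boundaries" used here come from sequential analysis: the cumulative z statistic of the updated meta-analysis is compared with a threshold that is strict early on and relaxes toward 1.96 as information accrues. A sketch of one common construction, an O'Brien-Fleming-type boundary on the z scale (the trial effects below are invented; a full trial-sequential analysis would also prespecify the required information size rather than treat the last trial as the endpoint):

```python
from math import sqrt

def cumulative_z(effects, ses):
    """Fixed-effect cumulative z statistics as trials accrue."""
    zs = []
    for k in range(1, len(effects) + 1):
        w = [1 / s ** 2 for s in ses[:k]]
        pooled = sum(wi * e for wi, e in zip(w, effects[:k])) / sum(w)
        zs.append(pooled * sqrt(sum(w)))  # pooled estimate divided by its SE
    return zs

def obf_boundary(info_fractions, z_alpha=1.96):
    """Approximate O'Brien-Fleming-type boundary on the z scale: z_alpha / sqrt(t)."""
    return [z_alpha / sqrt(t) for t in info_fractions]

# invented effects; information fraction = accrued weight / final weight
effects, ses = [-0.6, -0.4, -0.5, -0.45], [0.40, 0.30, 0.25, 0.20]
zs = cumulative_z(effects, ses)
w_total = sum(1 / s ** 2 for s in ses)
fracs = [sum(1 / s ** 2 for s in ses[:k]) / w_total for k in range(1, 5)]
crossed = [abs(z) > b for z, b in zip(zs, obf_boundary(fracs))]
```

In this toy sequence the boundary is only crossed at the final update, illustrating how early significant-looking z values are discounted until enough information has accumulated.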
Affiliation(s)
- Adriani Nikolakopoulou
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Dimitris Mavridis
- Department of Primary Education, University of Ioannina, Ioannina, Greece
- Centre de Recherche Épidémiologie et Statistique Sorbonne Paris Cité, Inserm/Université Paris Descartes, Paris, France
- Toshi A Furukawa
- Departments of Health Promotion and Human Behavior and of Clinical Epidemiology, Kyoto University Graduate School of Medicine/School of Public Health, Kyoto, Japan
- Andrea Cipriani
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, UK
- Andrea C Tricco
- Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Ontario, Canada
- Epidemiology Division, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Sharon E Straus
- Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Ontario, Canada
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Matthias Egger
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
25
Copsey B, Dutton S, Fitzpatrick R, Lamb SE, Cook JA. Current practice in methodology and reporting of the sample size calculation in randomised trials of hip and knee osteoarthritis: a protocol for a systematic review. Trials 2017; 18:466. [PMID: 29017518 PMCID: PMC5634891 DOI: 10.1186/s13063-017-2209-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2017] [Accepted: 09/25/2017] [Indexed: 12/04/2022] Open
Abstract
Background A key aspect of the design of randomised controlled trials (RCTs) is determining the sample size. It is important that the trial sample size is appropriately calculated. The required sample size will differ by clinical area, for instance, due to the prevalence of the condition and the choice of primary outcome. Additionally, it will depend upon the choice of target difference assumed in the calculation. Focussing upon the hip and knee osteoarthritis population, this study aims to systematically review how the trial size was determined for trials of osteoarthritis, on what basis, and how well these aspects are reported. Methods Several electronic databases (Medline, Cochrane library, CINAHL, EMBASE, PsycINFO, PEDro and AMED) will be searched to identify articles on RCTs of hip and knee osteoarthritis published in 2016. Articles will be screened for eligibility and data extracted independently by two reviewers. Data will be extracted on study characteristics (design, population, intervention and control treatments), primary outcome, chosen sample size and justification, parameters used to calculate the sample size (including treatment effect in control arm, level of variability in primary outcome, loss to follow-up rates). Data will be summarised across the studies using appropriate summary statistics (e.g. n and %, median and interquartile range). The proportion of studies which report each key component of the sample size calculation will be presented. The reproducibility of the sample size calculation will be tested. Discussion The findings of this systematic review will summarise the current practice for sample size calculation in trials of hip and knee osteoarthritis. It will also provide evidence on the completeness of the reporting of the sample size calculation, reproducibility of the chosen sample size and the basis for the values used in the calculation. 
Trial registration As this review was not eligible for registration on PROSPERO, the summary information was uploaded to Figshare to make it publicly accessible and to avoid unnecessary duplication, among other benefits (https://doi.org/10.6084/m9.figshare.5009027.v1); registered January 17, 2017.
Affiliation(s)
- Bethan Copsey: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Susan Dutton: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Ray Fitzpatrick: Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Sarah E Lamb: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Jonathan A Cook: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
26
Biau DJ, Boulezaz S, Casabianca L, Hamadouche M, Anract P, Chevret S. Using Bayesian statistics to estimate the likelihood a new trial will demonstrate the efficacy of a new treatment. BMC Med Res Methodol 2017; 17:128. PMID: 28830464. PMCID: PMC5568256. DOI: 10.1186/s12874-017-0401-x.
Abstract
Background The common frequentist approach is limited in providing investigators with appropriate measures for conducting a new trial. To answer such important questions, one has to look at Bayesian statistics. Methods As a worked example, we conducted a Bayesian cumulative meta-analysis to summarize the benefit of patient-specific instrumentation on the alignment of total knee replacement from previously published evidence. Data were sourced from the Medline, Embase, and Cochrane databases. All randomised controlled comparisons of the effect of patient-specific instrumentation on the coronal alignment of total knee replacement were included. The main outcome was the risk difference, measured as the proportion of failures in the control group minus the proportion of failures in the experimental group. Through Bayesian statistics, we estimated, cumulatively over the publication time of the trial results: the posterior probabilities that the risk difference was more than 5% and 10%; the posterior probabilities that, given the results of all previously published trials, an additional hypothetical trial would achieve a risk difference of at least 5%; and the predictive probabilities that the observed failure rates would differ by at least 5% across arms. Results Thirteen trials were identified, including 1092 patients: 554 in the experimental group and 538 in the control group. The cumulative mean risk difference was 0.5% (95% CrI: -5.7%; +4.5%). The posterior probabilities that the risk difference was superior to 5% and 10% fell below 5% after trial #4 and trial #2, respectively. The predictive probability that the difference in failure rates was at least 5% dropped from 45% after the first trial down to 11% after the 13th. Last, only unrealistic trial design parameters could change the overall evidence accumulated to date. Conclusions Bayesian probabilities are readily understandable when discussing the relevance of performing a new trial. They provide investigators with the current probability that an experimental treatment is superior to a reference treatment and, if a new trial is designed, the predictive probability that this new trial will reach the targeted risk difference in failure rates. Trial registration CRD42015024176.
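The posterior quantities this abstract describes can be sketched with a simple conjugate model. The following is an illustrative Monte Carlo sketch, not the authors' actual analysis: it assumes independent Beta(1, 1) priors on each arm's failure rate, and while the arm sizes (538 control, 554 experimental) come from the abstract, the failure counts are hypothetical.

```python
import random

def posterior_prob_risk_difference(fail_ctrl, n_ctrl, fail_exp, n_exp,
                                   threshold=0.05, draws=50_000, seed=42):
    """Monte Carlo estimate of P(p_ctrl - p_exp > threshold | data) under
    independent Beta(1, 1) priors on each arm's failure probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        # Conjugate Beta posteriors: Beta(1 + failures, 1 + successes)
        p_ctrl = rng.betavariate(1 + fail_ctrl, 1 + n_ctrl - fail_ctrl)
        p_exp = rng.betavariate(1 + fail_exp, 1 + n_exp - fail_exp)
        if p_ctrl - p_exp > threshold:
            hits += 1
    return hits / draws

# Hypothetical pooled counts: 60/538 failures (control) vs 45/554 (experimental)
prob = posterior_prob_risk_difference(60, 538, 45, 554)
print(round(prob, 2))
```

The same machinery gives the predictive probability for a planned trial by first drawing the arm rates from their posteriors and then simulating the new trial's counts.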
Affiliation(s)
- David J Biau: INSERM U1153, Paris, France; Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014 Paris, France; Université Paris-Descartes, Paris 5, Paris, France
- Samuel Boulezaz: Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014 Paris, France
- Laurent Casabianca: Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014 Paris, France
- Moussa Hamadouche: Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014 Paris, France
- Philippe Anract: INSERM U1153, Paris, France; Service de chirurgie orthopédique, Hôpital Cochin, 27 rue du faubourg Saint-Jacques, 75014 Paris, France
- Sylvie Chevret: INSERM U1153, Paris, France; Université Paris-Diderot, Paris 7, Paris, France
27
Quality of sample size estimation in trials of medical devices: high-risk devices for neurological conditions as example. Int J Technol Assess Health Care 2017; 33:103-110. PMID: 28502271. DOI: 10.1017/s0266462317000265.
Abstract
BACKGROUND The aim of this study was to assess the quality of reporting of sample size calculations and the underlying design assumptions in pivotal trials of high-risk medical devices (MDs) for neurological conditions. METHODS Systematic review of research protocols for publicly registered randomized controlled trials (RCTs). In the absence of a published protocol, principal investigators were contacted for additional data. To be included, trials had to investigate a high-risk MD, be registered between 2005 and 2015, and address stroke, headache disorders, or epilepsy as case-sample indications within central nervous system diseases. Extraction of key methodological parameters for the sample size calculation was performed independently and peer-reviewed. RESULTS In a final sample of seventy-one eligible trials, we collected data from thirty-one trials. Eighteen protocols were obtained from the public domain or from principal investigators. Data availability decreased during the extraction process, with almost all data available for stroke-related trials. Of the thirty-one trials with sample size information available, twenty-six reported a predefined calculation and its underlying assumptions. Justification was given in twenty trials and evidence for parameter estimation in sixteen. Estimates were most often based on previous research, including RCTs and observational data; the observational data were predominantly from retrospective designs. Other references for parameter estimation indicated a lower level of evidence. CONCLUSIONS Our systematic review of trials of high-risk MDs confirms previous research documenting deficiencies in data availability and a lack of reporting on sample size calculation. More effort is needed to ensure both that relevant sources, that is, original research protocols, are publicly available and that reporting requirements are standardized.
28
Lee PH, Tse ACY. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed. Eur J Intern Med 2017; 40:16-21. PMID: 27769569. DOI: 10.1016/j.ejim.2016.10.008.
Abstract
BACKGROUND There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of the calculations themselves. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. METHODS We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. RESULTS Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in a trial registry; about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the registry. CONCLUSIONS The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed.
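The reproducibility check described here, recomputing a trial's sample size from its reported parameters and taking the percentage difference, can be sketched as follows. This uses the generic normal-approximation formula for comparing two proportions (without continuity correction), not necessarily the authors' exact recomputation procedure; the reported n of 170 is a hypothetical example.

```python
from math import ceil
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)          # two-sided significance level
    zb = z.inv_cdf(power)                  # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((za + zb) ** 2 * variance / (p1 - p2) ** 2)

def pct_difference(reported_n, p1, p2, alpha=0.05, power=0.80):
    """Percentage difference between reported and recalculated sample size."""
    calc = n_two_proportions(p1, p2, alpha, power)
    return 100 * (reported_n - calc) / calc

# A trial reporting n = 170 per arm for expected success rates of 50% vs 35%:
print(n_two_proportions(0.50, 0.35))              # 167
print(round(pct_difference(170, 0.50, 0.35), 1))  # 1.8
```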
Affiliation(s)
- Paul H Lee: School of Nursing, Hong Kong Polytechnic University, Hong Kong
- Andy C Y Tse: Department of Health and Physical Education, The Education University of Hong Kong, Hong Kong
29
Sakakura K, Funayama H, Taniguchi Y, Tsurumaki Y, Yamamoto K, Matsumoto M, Wada H, Momomura SI, Fujita H. The incidence of slow flow after rotational atherectomy of calcified coronary arteries: A randomized study of low speed versus high speed. Catheter Cardiovasc Interv 2016; 89:832-840. PMID: 27453426. DOI: 10.1002/ccd.26698.
Abstract
OBJECTIVES The purpose of this randomized trial was to compare the incidence of slow flow between low-speed and high-speed rotational atherectomy (RA) of calcified coronary lesions. BACKGROUND Preclinical studies suggest that slow flow is less frequently observed with low-speed than high-speed RA because of less platelet aggregation with low-speed RA. METHODS This was a prospective, randomized, single center study. A total of 100 patients with calcified coronary lesions were enrolled and randomly assigned in a 1:1 ratio to low-speed (140,000 rpm) or high-speed (190,000 rpm) RA. The primary endpoint was the occurrence of slow flow following RA. Slow flow was defined as slow or absent distal runoff (Thrombolysis in Myocardial Infarction [TIMI] flow grade ≤ 2). RESULTS The incidence of slow flow in the low-speed group (24%) was the same as that in the high-speed group (24%) (P = 1.00; odds ratio, 1.00; 95% confidence interval, 0.40-2.50). The frequencies of TIMI 3, TIMI 2, TIMI 1, and TIMI 0 flow grades were similar between the low-speed (TIMI 3, 76%; TIMI 2, 14%; TIMI 1, 8%; TIMI 0, 2%) and high-speed (TIMI 3, 76%; TIMI 2, 14%; TIMI 1, 10%; TIMI 0, 0%) groups (P = 0.77 for trend). The incidence of periprocedural myocardial infarction was the same between the low-speed (6%) and high-speed (6%) groups (P = 1.00). CONCLUSIONS This randomized trial did not show a reduction in the incidence of slow flow following low-speed RA as compared with high-speed RA (UMIN ID: UMIN000015702).
Affiliation(s)
- Kenichi Sakakura, Hiroshi Funayama, Yousuke Taniguchi, Yoshimasa Tsurumaki, Kei Yamamoto, Mitsunari Matsumoto, Hiroshi Wada, Shin-Ichi Momomura, Hideo Fujita: Division of Cardiovascular Medicine, Saitama Medical Center, Jichi Medical University, 1-847 Amanuma, Omiya, Saitama City 330-8503, Japan
30
Finding Alternatives to the Dogma of Power Based Sample Size Calculation: Is a Fixed Sample Size Prospective Meta-Experiment a Potential Alternative? PLoS One 2016; 11:e0158604. PMID: 27362939. PMCID: PMC4928786. DOI: 10.1371/journal.pone.0158604.
Abstract
Sample sizes for randomized controlled trials are typically based on power calculations. These require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for a sample size calculation. In a simulation study, we compared a meta-experiment approach with the classical approach to assessing treatment efficacy. The meta-experiment approach involves meta-analyzing the results of 3 randomized trials of fixed sample size (100 subjects each). The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori formulated hypothesis. For the sample size calculation in the classical approach, we used published articles to characterize the errors made in the formulated hypotheses. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may thus provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; its results may also have greater external validity.
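The meta-experiment idea, pooling several fixed-size trials by fixed-effect inverse-variance meta-analysis, can be sketched as follows. This is a simplified simulation under assumed parameters (a true effect of 0.5 SD, known outcome variance, 3 trials of 50 per arm), not the paper's full simulation study.

```python
import random
from math import sqrt
from statistics import NormalDist

def trial(n_per_arm, effect, sigma, rng):
    """Simulate one two-arm trial; return the estimated difference and its variance."""
    ctrl = [rng.gauss(0.0, sigma) for _ in range(n_per_arm)]
    trt = [rng.gauss(effect, sigma) for _ in range(n_per_arm)]
    diff = sum(trt) / n_per_arm - sum(ctrl) / n_per_arm
    var = 2 * sigma ** 2 / n_per_arm   # known-variance simplification
    return diff, var

def meta_experiment(effect=0.5, sigma=1.0, n_per_arm=50, trials=3, seed=7):
    """Fixed-effect inverse-variance meta-analysis of several fixed-size trials."""
    rng = random.Random(seed)
    results = [trial(n_per_arm, effect, sigma, rng) for _ in range(trials)]
    weights = [1 / v for _, v in results]
    pooled = sum(w * d for (d, _), w in zip(results, weights)) / sum(weights)
    se = sqrt(1 / sum(weights))
    z = pooled / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return pooled, p

est, p = meta_experiment()
print(round(est, 2), round(p, 4))
```

Because every trial here has the same (known) variance, the pooled estimate reduces to the grand mean difference; the inverse-variance machinery matters when trial sizes differ.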
31
Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003-13: a multimethod evaluation. Health Technol Assess 2015; 19:1-291. PMID: 26307643. DOI: 10.3310/hta19670.
Abstract
BACKGROUND The National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme supports research tailored to the needs of NHS decision-makers, patients and clinicians. This study reviewed the impact of the programme, from 2003 to 2013, on health, clinical practice, health policy, the economy and academia. It also considered how HTA could maintain and increase its impact. METHODS Interviews (n = 20): senior stakeholders from academia, policy-making organisations and the HTA programme. Bibliometric analysis: citation analysis of publications arising from HTA programme-funded research. Researchfish survey: electronic survey of all HTA grant holders. Payback case studies (n = 12): in-depth case studies of HTA programme-funded research. RESULTS We make the following observations about the impact, and routes to impact, of the HTA programme:
- it has had an impact on patients, primarily through changes in guidelines, but also directly (e.g. changing clinical practice);
- it has had an impact on UK health policy through providing high-quality scientific evidence; its close relationships with the National Institute for Health and Care Excellence (NICE) and the National Screening Committee (NSC) contributed to the observed impact on health policy, although in some instances other organisations may better facilitate impact;
- HTA research is used outside the UK by other HTA organisations and systematic reviewers; the programme has an impact on HTA practice internationally as a leader in HTA research methods and the funding of HTA research;
- the work of the programme is of high academic quality; the Health Technology Assessment journal ensures that the vast majority of HTA programme-funded research is published in full, while the HTA programme still encourages publication in other peer-reviewed journals;
- academics agree that the programme has played an important role in building and retaining HTA research capacity in the UK;
- the HTA programme has played a role in increasing the focus on effectiveness and cost-effectiveness in medicine; it has also contributed to increasingly positive attitudes towards HTA research both within the research community and the NHS;
- and the HTA programme focuses resources on research that is of value to patients and the UK NHS which would not otherwise be funded (e.g. where there is no commercial incentive to undertake research).
The programme should consider the following to maintain and increase its impact:
- providing targeted support for dissemination, focusing resources when important results are unlikely to be implemented by other stakeholders, particularly when findings challenge vested interests;
- maintaining close relationships with NICE and the NSC, but also considering other potential users of HTA research;
- maintaining flexibility and good relationships with researchers, giving particular consideration to the Technology Assessment Report (TAR) programme and the potential for learning between TAR centres;
- maintaining the academic quality of the work and the focus on NHS need;
- considering funding research on the short-term costs of the implementation of new health technologies;
- improving the monitoring and evaluation of whether or not patient and public involvement influences research;
- improving the transparency of the priority-setting process;
- and continuing to monitor the impact and value of the programme to inform its future scientific and administrative development.
32
Martin J, Taljaard M, Girling A, Hemming K. Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials. BMJ Open 2016; 6:e010166. PMID: 26846897. PMCID: PMC4746455. DOI: 10.1136/bmjopen-2015-010166.
Abstract
BACKGROUND Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. METHODS We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. RESULTS We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5-6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. DISCUSSION The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs.
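The clustering allowance mentioned above is often expressed as a design effect. The sketch below shows only the basic parallel-cluster inflation, 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation; as this review stresses, SW-CRT calculations additionally need allowance for time effects and repeated measures, which this simple form omits. The worked numbers are illustrative.

```python
from math import ceil

def design_effect(cluster_size, icc):
    """Variance inflation factor for parallel-group cluster randomisation:
    DE = 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

def clustered_n(individual_n, cluster_size, icc):
    """Inflate an individually randomised per-arm sample size for clustering."""
    return ceil(individual_n * design_effect(cluster_size, icc))

# 63 per arm if individually randomised, clusters of 20, ICC = 0.05:
print(round(design_effect(20, 0.05), 2))  # 1.95
print(clustered_n(63, 20, 0.05))          # 123
```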
Affiliation(s)
- James Martin: School of Health and Population Sciences, University of Birmingham, Birmingham, UK
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Alan Girling: School of Health and Population Sciences, University of Birmingham, Birmingham, UK
- Karla Hemming: School of Health and Population Sciences, University of Birmingham, Birmingham, UK
33
Abstract
Addressing sample size is a practical issue that has to be solved during the planning and design stage of a study. The aim of any clinical research is to detect the actual difference between two groups (power) and to provide an estimate of the difference with reasonable accuracy (precision). Hence, researchers should estimate the sample size a priori, well before conducting the study; post hoc sample size computation is conventionally discouraged. An adequate sample size minimizes random error or, in other words, lessens the likelihood of findings arising by chance. Too small a sample may fail to answer the research question and can be of questionable validity or provide an imprecise answer, while too large a sample may answer the question but is resource-intensive and may also be unethical. More transparency in the calculation of sample size is required so that it can be justified and replicated when reported.
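The a priori estimate the authors call for reduces, for a two-arm comparison of means, to a short closed-form calculation. Below is a minimal sketch of the standard normal-approximation formula; it is illustrative (function and parameter names are ours), not drawn from this article's text.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of means.

    Normal-approximation formula: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
    delta: minimal clinically important difference; sigma: common SD.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # desired power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a difference of 0.5 SD with 80% power at two-sided alpha = 0.05:
print(n_per_group(delta=0.5, sigma=1.0))  # 63
```

An exact t-based calculation gives a slightly larger n (typically one or two more per arm) because the variance must also be estimated from the data.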
Affiliation(s)
- Sabyasachi Das: Department of Anaesthesiology and Critical Care, Medical College, Kolkata, West Bengal, India
- Koel Mitra: Department of Anaesthesiology and Critical Care, Medical College, Kolkata, West Bengal, India
- Mohanchandra Mandal: Department of Anaesthesiology and Critical Care, North Bengal Medical College, Sushrutanagar, Darjeeling, West Bengal, India
34
Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol 2015; 15:108. PMID: 26715462. PMCID: PMC4696153. DOI: 10.1186/s12874-015-0102-2.
Abstract
Background Chalmers and Glasziou's paper published in 2014 recommends that research funding bodies mandate that proposals for additional primary research be built on systematic reviews of existing evidence showing what is already known. Jones et al. identified that 11 (23%) of 48 trials funded during 2006-8 by the National Institute for Health Research Health Technology Assessment (NIHR HTA) Programme did not reference a systematic review. That study did not explore the reasons for trials not referencing a systematic review or consider trials using any other evidence in the absence of a systematic review. Referencing a systematic review may not be possible in certain circumstances, for instance if the systematic review does not address the question being proposed in the trial. The current study extended Jones' study by exploring the reasons why trials did not reference a systematic review and included a more recent cohort of trials funded in 2013 to determine whether there were any changes in the referencing or use of systematic reviews. Methods Two cohorts of NIHR HTA randomised controlled trials were included. Cohort I included the same trials as Jones et al. (with the exception of one trial, which was discontinued). Cohort II included NIHR HTA trials funded in 2013. Data extraction was undertaken independently by two reviewers using full applications and trial protocols. Descriptive statistics were used and no formal statistical analyses were conducted. Results Five (11%) of the 47 trials funded during 2006-2008 did not reference a systematic review; these 5 trials had justifiable reasons for not doing so. All trials from Cohort II referenced a systematic review. A quarter of all trials with a preceding systematic review used a different primary outcome from that stated in the review. Conclusions The NIHR requires that proposals for new primary research be justified by existing evidence, and the findings of this study confirm adherence to this requirement, with a high rate of applications using systematic reviews.
Affiliation(s)
- Sheetal Bhurke: Wessex Institute, University of Southampton, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK
- Andrew Cook: NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK
- Anna Tallant: NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK
- Amanda Young: Wessex Institute, University of Southampton, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK; NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK
- Elaine Williams: NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK
- James Raftery: Wessex Institute, University of Southampton, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK; NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK
35
Abdulatif M, Mukhtar A, Obayah G. Pitfalls in reporting sample size calculation in randomized controlled trials published in leading anaesthesia journals: a systematic review. Br J Anaesth 2015; 115:699-707. DOI: 10.1093/bja/aev166.
36
Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial. PLoS One 2015; 10:e0132578. PMID: 26173007. PMCID: PMC4501786. DOI: 10.1371/journal.pone.0132578.
Abstract
We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was < 60%, as compared with the 80% nominal power); 41%, 16% and 6%, respectively, were overpowered (i.e., with real power > 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
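The retro-fitting step described here, deriving the real power of a pre-calculated sample size N under a mis-specified nuisance parameter, can be sketched for a continuous outcome whose standard deviation was assumed incorrectly at the design stage. This is a simplified known-variance (z-test) version of the idea with illustrative numbers, not the paper's simulation.

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size from the normal-approximation formula."""
    za, zb = Z.inv_cdf(1 - alpha / 2), Z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)

def achieved_power(n, delta, true_sigma, alpha=0.05):
    """Real power of a two-arm z-test with n per group when the true SD
    differs from the value assumed when n was calculated."""
    za = Z.inv_cdf(1 - alpha / 2)
    ncp = delta / (true_sigma * sqrt(2 / n))   # noncentrality of the z statistic
    return Z.cdf(ncp - za)

n = n_per_group(delta=0.5, sigma=1.0)          # planned assuming SD = 1.0
print(n)                                       # 63
print(round(achieved_power(n, 0.5, 1.0), 2))   # 0.8  (assumption correct)
print(round(achieved_power(n, 0.5, 1.2), 2))   # 0.65 (SD underestimated by 20%)
```

Underestimating the SD by 20% drops the real power from the nominal 80% to roughly 65%, the kind of silent power loss the abstract quantifies empirically.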
37
Halme AS, Fritel X, Benedetti A, Eng K, Tannenbaum C. Implications of the minimal clinically important difference for health-related quality-of-life outcomes: a comparison of sample size requirements for an incontinence treatment trial. Value Health 2015; 18:292-298. PMID: 25773565. DOI: 10.1016/j.jval.2014.11.004.
Abstract
BACKGROUND Sample size calculations for treatment trials that aim to assess health-related quality-of-life (HRQOL) outcomes are often difficult to perform. Researchers must select a target minimal clinically important difference (MCID) in HRQOL for the trial, estimate the effect size of the intervention, and then consider the responsiveness of different HRQOL measures for detecting improvements. Generic preference-based HRQOL measures are usually less sensitive to gains in HRQOL than are disease-specific measures, but are nonetheless recommended to quantify an impact on HRQOL that can be translated into quality-adjusted life-years during cost-effectiveness analyses. Mapping disease-specific measures onto generic measures is a proposed method for yielding more efficient sample size requirements while retaining the ability to generate utility weights for cost-effectiveness analyses. OBJECTIVES This study sought to test this mapping strategy by calculating and comparing the sample size requirements of three different methods. METHODS Three different methods were used for determining an MCID in HRQOL in patients with incontinence: 1) a global rating of improvement, 2) an incontinence-specific HRQOL instrument, and 3) a generic preference-based HRQOL instrument using mapping coefficients. RESULTS The sample size required to detect a 20% difference in the MCID was 52 per trial arm for the global rating of improvement, 172 per arm for the incontinence-specific HRQOL outcome, and 500 per arm for the generic preference-based HRQOL outcome. CONCLUSIONS We caution that treatment trials of conditions for which improvements are not easy to measure on generic HRQOL instruments will still require significantly greater sample sizes even when mapping functions are used to try to gain efficiency.
Collapse
Affiliation(s)
- Alex S Halme
- Faculty of Medicine, University of Montreal, Montreal, QC, Canada
| | - Xavier Fritel
- Faculty of Medicine and Pharmacy, University of Poitiers, Poitiers, France
| | - Andrea Benedetti
- Departments of Medicine, Biostatistics and Occupational Health, McGill University, Montreal, Quebec, Canada; Departments of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, Quebec, Canada; Respiratory Epidemiology and Clinical Research Unit, McGill University Health Center, Montreal, QC, Canada
| | - Ken Eng
- Independent Consultant, Ottawa, ON, Canada
| | - Cara Tannenbaum
- Faculties of Medicine and Pharmacy, University of Montreal, Montreal, QC, Canada.
| |
Collapse
|
38
|
Billoir E, Navratil V, Blaise BJ. Sample size calculation in metabolic phenotyping studies. Brief Bioinform 2015; 16:813-9. [PMID: 25600654 DOI: 10.1093/bib/bbu052] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2014] [Indexed: 01/07/2023] Open
Abstract
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down, hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure has been available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (MathWorks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki.
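The multiplicity step mentioned in this abstract, the Benjamini-Yekutieli correction, controls the false discovery rate under arbitrary dependence by tightening the Benjamini-Hochberg step-up thresholds with a harmonic-sum factor. A minimal stdlib sketch of that step-up procedure (this is not code from the DSD toolbox, which is distributed for MATLAB/Octave):

```python
def benjamini_yekutieli(pvals, q=0.05):
    """Return the indices of hypotheses rejected by the Benjamini-Yekutieli
    step-up procedure at false discovery rate q (valid under any dependence
    structure among the tests)."""
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))  # harmonic penalty vs. BH
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its BY threshold
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * q / (m * c_m):
            k = rank
    return sorted(order[:k])

print(benjamini_yekutieli([0.001, 0.2, 0.04, 0.9]))  # → [0]
```

With four tests the harmonic factor is 1 + 1/2 + 1/3 + 1/4 ≈ 2.08, so each BH threshold is roughly halved; only the smallest p-value survives in the example above.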
Collapse
|
39
|
Cook JA, Hislop J, Altman DG, Fayers P, Briggs AH, Ramsay CR, Norrie JD, Harvey IM, Buckley B, Fergusson D, Ford I, Vale LD. Specifying the target difference in the primary outcome for a randomised controlled trial: guidance for researchers. Trials 2015; 16:12. [PMID: 25928502 PMCID: PMC4302137 DOI: 10.1186/s13063-014-0526-8] [Citation(s) in RCA: 49] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2014] [Accepted: 12/19/2014] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Central to the design of a randomised controlled trial is the calculation of the number of participants needed. This is typically achieved by specifying a target difference and calculating the corresponding sample size, which provides reassurance that the trial will have the required statistical power (at the planned statistical significance level) to identify whether a difference of a particular magnitude exists. Beyond pure statistical or scientific concerns, it is ethically imperative that an appropriate number of participants should be recruited. Despite the critical role of the target difference for the primary outcome in the design of randomised controlled trials, its determination has received surprisingly little attention. This article provides guidance on the specification of the target difference for the primary outcome in a sample size calculation for a two-parallel-group randomised controlled trial with a superiority question. METHODS This work was part of the DELTA (Difference ELicitation in TriAls) project. Draft guidance was developed by the project steering and advisory groups utilising the results of the systematic review and surveys. Findings were circulated and presented to members of the combined group at a face-to-face meeting, along with a proposed outline of the guidance document structure, containing recommendations and reporting items for a trial protocol and report. The guidance was subsequently drafted and circulated for further comment before finalisation. RESULTS Guidance on specification of a target difference in the primary outcome for a two-parallel-group randomised controlled trial was produced. Additionally, a list of reporting items for protocols and trial reports was generated. CONCLUSIONS Specification of the target difference for the primary outcome is a key component of a randomised controlled trial sample size calculation.
There is a need for better justification of the target difference and reporting of its specification.
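Why the target difference deserves explicit justification can be made concrete by inverting the usual two-group sample size formula: for a fixed sample size, it yields the smallest difference the trial can detect at the stated power, so an over-optimistic target difference shows up directly as an under-powered design. A hedged stdlib sketch, not taken from the DELTA guidance itself:

```python
from math import sqrt
from statistics import NormalDist

def detectable_difference(n_per_arm, sd, alpha=0.05, power=0.80):
    """Smallest target difference a two-arm parallel-group trial of the
    given per-arm size can detect with the stated power, using the
    normal approximation for comparing two means."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sd * sqrt(2 / n_per_arm)
```

For example, 63 participants per arm with sd = 1 detects a difference of about 0.5 sd; quadrupling the arms to 252 only halves the detectable difference, which is why an unjustified target difference cannot be rescued cheaply after the fact.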
Collapse
Affiliation(s)
- Jonathan A Cook
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Botnar Research Centre, Nuffield Orthopaedic Centre, Windmill Road, Oxford, OX3 7LD, UK.
- Health Services Research Unit, University of Aberdeen, Health Sciences Building, Foresthill, Aberdeen, AB25 2ZD, UK.
| | - Jenni Hislop
- Institute of Health and Society, Newcastle University, The Baddiley-Clark Building, Richardson Road, Newcastle upon Tyne, NE2 4AX, UK.
| | - Douglas G Altman
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Botnar Research Centre, Nuffield Orthopaedic Centre, Windmill Road, Oxford, OX3 7LD, UK.
| | - Peter Fayers
- Population Health, University of Aberdeen, Polwarth Building, Foresterhill, Aberdeen, AB25 2ZD, UK.
- Department of Cancer Research and Molecular, Norwegian University of Science and Technology, Mailbox 8905, Trondheim, N-7491, Norway.
| | - Andrew H Briggs
- Health Economics and Health Technology Assessment, University of Glasgow, 1 Lilybank Gardens, Glasgow, G12 8RZ, UK.
| | - Craig R Ramsay
- Health Services Research Unit, University of Aberdeen, Health Sciences Building, Foresthill, Aberdeen, AB25 2ZD, UK.
| | - John D Norrie
- Centre for Healthcare Randomised Trials (CHaRT), University of Aberdeen, Health Sciences Building, Aberdeen, AB25 2ZD, UK.
| | - Ian M Harvey
- Faculty of Medicine and Health Sciences, University of East Anglia, Elizabeth Fry Building, Norwich Research Park, Norwich, NR4 7TJ, UK.
| | - Brian Buckley
- National University of Ireland, University Road, Galway, Ireland.
| | - Dean Fergusson
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, 725 Parkdale Avenue, Ottawa, ON, K1Y 4E9, Canada.
| | - Ian Ford
- Robertson Centre for Biostatistics, University of Glasgow, Boyd Orr Building, University Avenue, Glasgow, G12 8QQ, UK.
| | - Luke D Vale
- Institute of Health and Society, Newcastle University, The Baddiley-Clark Building, Richardson Road, Newcastle upon Tyne, NE2 4AX, UK.
| |
Collapse
|
40
|
McKeown A, Gewandter JS, McDermott MP, Pawlowski JR, Poli JJ, Rothstein D, Farrar JT, Gilron I, Katz NP, Lin AH, Rappaport BA, Rowbotham MC, Turk DC, Dworkin RH, Smith SM. Reporting of sample size calculations in analgesic clinical trials: ACTTION systematic review. J Pain 2014; 16:199-206.e1-7. [PMID: 25481494 DOI: 10.1016/j.jpain.2014.11.010] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 10/03/2014] [Revised: 11/10/2014] [Accepted: 11/13/2014] [Indexed: 11/29/2022]
Abstract
UNLABELLED Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions. PERSPECTIVE In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. 
A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size.
Collapse
Affiliation(s)
- Andrew McKeown
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Jennifer S Gewandter
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Michael P McDermott
- Department of Biostatistics and Computational Biology, University of Rochester School of Medicine and Dentistry, Rochester, New York; Department of Neurology, University of Rochester School of Medicine and Dentistry, Rochester, New York; Department of Center for Human Experimental Therapeutics, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Joseph R Pawlowski
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Joseph J Poli
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Daniel Rothstein
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - John T Farrar
- University of Pennsylvania, Philadelphia, Pennsylvania
| | - Ian Gilron
- Queen's University, Kingston, Ontario, Canada
| | - Nathaniel P Katz
- Analgesic Solutions, Natick, Massachusetts; Department of Anesthesiology, Tufts University, Boston, Massachusetts
| | - Allison H Lin
- Center for Drug Evaluation and Research, United States Food and Drug Administration, Silver Spring, Maryland
| | - Bob A Rappaport
- Center for Drug Evaluation and Research, United States Food and Drug Administration, Silver Spring, Maryland
| | | | - Dennis C Turk
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, Washington
| | - Robert H Dworkin
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York; Department of Neurology, University of Rochester School of Medicine and Dentistry, Rochester, New York; Department of Center for Human Experimental Therapeutics, University of Rochester School of Medicine and Dentistry, Rochester, New York
| | - Shannon M Smith
- Department of Anesthesiology, University of Rochester School of Medicine and Dentistry, Rochester, New York.
| |
Collapse
|
41
|
Molina KI, Ricci NA, de Moraes SA, Perracini MR. Virtual reality using games for improving physical functioning in older adults: a systematic review. J Neuroeng Rehabil 2014; 11:156. [PMID: 25399408 PMCID: PMC4247561 DOI: 10.1186/1743-0003-11-156] [Citation(s) in RCA: 96] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2014] [Accepted: 10/31/2014] [Indexed: 11/10/2022] Open
Abstract
The use of virtual reality through exergames or active video games, a new form of interactive gaming, as a complementary tool in rehabilitation has been a frequent focus of research and clinical practice in the last few years. However, evidence of their effectiveness is scarce in the older population. This review aims to provide a summary of the effects of exergames on physical functioning in older adults. A search for randomized controlled trials was performed in the EMBASE, MEDLINE, PsycINFO, Cochrane, PEDro and ISI Web of Knowledge databases. Results from the included studies were analyzed through a critical review, and methodological quality was assessed with the PEDro scale. Thirteen studies were included in the review. The most common apparatus for exergame interventions was the Nintendo Wii gaming console (8 studies), followed by computer games and dance video games with a pad (two studies each), with only one study using the Balance Rehabilitation Unit. The Timed Up and Go was the most frequently used instrument to assess physical functioning (7 studies). According to the PEDro scale, most of the studies presented methodological problems, with a high proportion of scores below 5 points (8 studies). The exergame protocols and their duration varied widely, and the benefits for physical function in older people remain inconclusive. However, a consistent finding across studies is the positive motivational aspect that the use of exergames provides. Further studies are needed to achieve better methodological quality and external validity and to provide stronger scientific evidence.
Collapse
Affiliation(s)
- Karina Iglesia Molina
- Master's and Doctoral Programs in Physical Therapy, Universidade Cidade de São Paulo - UNICID, Rua Cesáreo Galeno, 448, 03071-000 Tatuapé, SP, Brazil.
| | | | | | | |
Collapse
|
43
|
Affiliation(s)
- Mahesh K B Parmar
- MRC Clinical Trials Unit, University College London, London WC2B 6NH, UK
| | - James Carpenter
- MRC Clinical Trials Unit, University College London, London WC2B 6NH, UK; London School of Hygiene & Tropical Medicine, London, UK
| | - Matthew R Sydes
- MRC Clinical Trials Unit, University College London, London WC2B 6NH, UK.
| |
Collapse
|
44
|
Clark T, Davies H, Mansmann U. Five questions that need answering when considering the design of clinical trials. Trials 2014; 15:286. [PMID: 25027292 PMCID: PMC4223595 DOI: 10.1186/1745-6215-15-286] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2014] [Accepted: 07/01/2014] [Indexed: 12/22/2022] Open
Abstract
Evidence suggests that research protocols often lack important information on study design, which hinders external review. The study protocol should provide an adequate explanation for why the proposed study methodology is appropriate for the question posed, why the study design is likely to answer the research question, and why it is the best approach. It is especially important that researchers explain why the treatment difference sought is worthwhile to patients, and they should reference consultations with the public and patient groups and existing literature. Moreover, the study design should be underpinned by a systematic review of the existing evidence, which should be included in the research protocol. The Health Research Authority in collaboration with partners has published guidance entitled 'Specific questions that need answering when considering the design of clinical trials'. The guidance will help those designing research and those reviewing it to address key issues.
Collapse
Affiliation(s)
- Timothy Clark
- Institut für Medizinische Informationsverarbeitung, Biometrie und Epidemiologie (IBE), Faculty of Medicine, Ludwig-Maximilians University, Munich, Germany
| | - Hugh Davies
- Health Research Authority, Skipton House, London SE1 6LH, UK
| | - Ulrich Mansmann
- Institut für Medizinische Informationsverarbeitung, Biometrie und Epidemiologie (IBE), Faculty of Medicine, Ludwig-Maximilians University, Munich, Germany
| |
Collapse
|
45
|
Teare MD, Dimairo M, Shephard N, Hayman A, Whitehead A, Walters SJ. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study. Trials 2014; 15:264. [PMID: 24993581 PMCID: PMC4227298 DOI: 10.1186/1745-6215-15-264] [Citation(s) in RCA: 358] [Impact Index Per Article: 35.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2013] [Accepted: 06/20/2014] [Indexed: 12/03/2022] Open
Abstract
Background External pilot or feasibility studies can be used to estimate key unknown parameters to inform the design of the definitive randomised controlled trial (RCT). However, there is little consensus on how large pilot studies need to be, and some suggest inflating estimates to adjust for the lack of precision when planning the definitive RCT. Methods We use a simulation approach to illustrate the sampling distribution of the standard deviation for continuous outcomes and the event rate for binary outcomes. We present the impact of increasing the pilot sample size on the precision and bias of these estimates, and predicted power under three realistic scenarios. We also illustrate the consequences of using a confidence interval argument to inflate estimates so the required power is achieved with a pre-specified level of confidence. We limit our attention to external pilot and feasibility studies prior to a two-parallel-balanced-group superiority RCT. Results For normally distributed outcomes, the relative gain in precision of the pooled standard deviation (SDp) is less than 10% (for each five subjects added per group) once the total sample size is 70. For true proportions between 0.1 and 0.5, we find the gain in precision for each five subjects added to the pilot sample is less than 5% once the sample size is 60. Adjusting the required sample sizes for the imprecision in the pilot study estimates can result in excessively large definitive RCTs and also requires a pilot sample size of 60 to 90 for the true effect sizes considered here. Conclusions We recommend that an external pilot study has at least 70 measured subjects (35 per group) when estimating the SDp for a continuous outcome. If the event rate in an intervention group needs to be estimated by the pilot then a total of 60 to 100 subjects is required. Hence if the primary outcome is binary a total of at least 120 subjects (60 in each group) may be required in the pilot trial. 
It is much more efficient to use a larger pilot study than to guard against the lack of precision by using inflated estimates.
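The simulation logic described above can be sketched with the standard library alone: draw repeated two-group pilots from a known distribution, compute the pooled SD each time, and watch its relative precision (coefficient of variation) shrink as the pilot grows. The sample sizes below are illustrative, not the paper's exact scenarios:

```python
import random
import statistics

def pooled_sd_cv(n_per_group, reps=2000, seed=7):
    """Coefficient of variation of the pooled SD estimated from a
    simulated two-group pilot drawn from a standard normal (true SD = 1)."""
    rng = random.Random(seed)
    sds = []
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        # pooled variance for two equal-sized groups
        pooled_var = (statistics.variance(a) + statistics.variance(b)) / 2
        sds.append(pooled_var ** 0.5)
    return statistics.stdev(sds) / statistics.fmean(sds)
```

With 35 subjects per group the CV of the pooled SD is already below roughly 9% (theory gives about 1 / sqrt(4(n - 1)) for normal data), and each further increment buys little, consistent with the 70-subject recommendation.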
Collapse
Affiliation(s)
- M Dawn Teare
- Design, Trials and Statistics Group, School of Health and Related Research, University of Sheffield, Regent Court, 30 Regent Street, S1 4DA Sheffield, UK.
| | | | | | | | | | | |
Collapse
|
46
|
Bell ML, Teixeira-Pinto A, McKenzie JE, Olivier J. A myriad of methods: Calculated sample size for two proportions was dependent on the choice of sample size formula and software. J Clin Epidemiol 2014; 67:601-5. [DOI: 10.1016/j.jclinepi.2013.10.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2013] [Revised: 10/11/2013] [Accepted: 10/21/2013] [Indexed: 10/25/2022]
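The discrepancy this title refers to is easy to reproduce: the pooled-variance normal approximation and its Fleiss continuity-corrected variant give different per-group sizes for identical inputs, and different software packages default to different formulas. A stdlib sketch with illustrative proportions (not the paper's worked example):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80, continuity=False):
    """Per-group sample size for comparing two independent proportions
    with the pooled-variance normal approximation; optionally apply the
    Fleiss continuity correction."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    p_bar = (p1 + p2) / 2
    d = abs(p1 - p2)
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    if continuity:
        n = n / 4 * (1 + sqrt(1 + 4 / (n * d))) ** 2
    return ceil(n)

# Same inputs, two textbook formulas, two different answers:
print(n_two_proportions(0.2, 0.3))                   # uncorrected
print(n_two_proportions(0.2, 0.3, continuity=True))  # Fleiss-corrected
```

For p1 = 0.2 vs p2 = 0.3 at 80% power, the correction adds roughly 20 participants per group, which is the scale of software-to-software disagreement the paper documents.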
|
47
|
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, Howells DW, Ioannidis JPA, Oliver S. How to increase value and reduce waste when research priorities are set. Lancet 2014; 383:156-65. [PMID: 24411644 DOI: 10.1016/s0140-6736(13)62229-1] [Citation(s) in RCA: 871] [Impact Index Per Article: 87.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The increase in annual global investment in biomedical research--reaching US$240 billion in 2010--has resulted in important health dividends for patients and the public. However, much research does not lead to worthwhile achievements, partly because some studies are done to improve understanding of basic mechanisms that might not have relevance for human health. Additionally, good research ideas often do not yield the anticipated results. As long as the way in which these ideas are prioritised for research is transparent and warranted, these disappointments should not be deemed wasteful; they are simply an inevitable feature of the way science works. However, some sources of waste cannot be justified. In this report, we discuss how avoidable waste can be considered when research priorities are set. We have four recommendations. First, ways to improve the yield from basic research should be investigated. Second, the transparency of processes by which funders prioritise important uncertainties should be increased, making clear how they take account of the needs of potential users of research. Third, investment in additional research should always be preceded by systematic assessment of existing evidence. Fourth, sources of information about research that is in progress should be strengthened and developed and used by researchers. Research funders have primary responsibility for reductions in waste resulting from decisions about what research to do.
Collapse
Affiliation(s)
| | - Michael B Bracken
- School of Public Health and School of Medicine, Yale University, New Haven, CT, USA
| | - Ben Djulbegovic
- Center for Evidence-Based Medicine and Health Outcomes Research, Division of Internal Medicine, University of South Florida, Tampa, FL, USA; Department of Hematology and Department of Health Outcomes and Behavior, H Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
| | - Silvio Garattini
- Istituto di Ricovero e Cura a Carattere Scientifico Istituto di Ricerche Farmacologiche Mario Negri, Milan, Italy
| | | | - A Metin Gülmezoglu
- UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), WHO, Geneva, Switzerland
| | - David W Howells
- Florey Institute of Neuroscience and Mental Health, Melbourne, VIC, Australia
| | - John P A Ioannidis
- Stanford Prevention Research Center, Department of Medicine, School of Medicine, Stanford University, Stanford, CA, USA; Division of Epidemiology, Department of Health Research and Policy, School of Medicine, Stanford University, Stanford, CA, USA; Department of Statistics, School of Humanities and Sciences, Stanford University, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
| | | |
Collapse
|