1. Cachexia and Sarcopenia in Oligometastatic Non-Small Cell Lung Cancer: Making a Potentially Curable Disease Incurable? Cancers (Basel) 2024; 16:230. PMID: 38201657; PMCID: PMC10777972; DOI: 10.3390/cancers16010230.
Abstract
Among patients with advanced NSCLC, there is a group of patients with synchronous oligometastatic disease (sOMD), defined as a limited number of metastases detected at the time of diagnosis. As cachexia and sarcopenia are linked to poor survival, incorporating this information could assist clinicians in determining whether a radical treatment should be administered. In a retrospective multicenter study, including all patients with adequately staged (FDG-PET, brain imaging) sOMD according to the EORTC definition, we aimed to assess the relationship between cachexia and/or sarcopenia and survival. Of the 439 patients identified between 2015 and 2021, 234 met the inclusion criteria and were included. The median age of the cohort was 67 years, 52.6% were male, and the median number of metastases was 1. Forty-six (19.7%) patients had cachexia, thirty-four (14.5%) had sarcopenia, and twenty-one (9.0%) had both. With a median follow-up of 49.7 months, median PFS and OS were 8.6 and 17.3 months, respectively. Moreover, a trend toward longer PFS was found in patients without cachexia and sarcopenia compared to those with cachexia and/or sarcopenia. In multivariate analysis, cachexia and sarcopenia were not associated with inferior survival, irrespective of receipt of radical treatment. High CRP was associated with inferior survival and could serve as a prognostic factor, helping clinicians select patients who may benefit from the addition of LRT. However, despite the homogeneous definition of oligometastatic disease and the adequate staging, our subgroups were small. Therefore, further studies are needed to test our hypothesis and generate firmer findings.
2.
Abstract
BACKGROUND/AIMS Tuberculosis remains one of the leading causes of death from an infectious disease globally. Both choices of outcome definitions and approaches to handling events happening post-randomisation can change the treatment effect being estimated, but these are often inconsistently described, thus inhibiting clear interpretation and comparison across trials. METHODS Starting from the ICH E9(R1) addendum's definition of an estimand, we use our experience of conducting large Phase III tuberculosis treatment trials and our understanding of the estimand framework to identify the key decisions regarding how different event types are handled in the primary outcome definition, and the important points that should be considered in making such decisions. A key issue is the handling of intercurrent (i.e. post-randomisation) events (ICEs) which affect interpretation of or preclude measurement of the intended final outcome. We consider common ICEs including treatment changes and treatment extension, poor adherence to randomised treatment, re-infection with a new strain of tuberculosis which is different from the original infection, and death. We use two completed tuberculosis trials (REMoxTB and STREAM Stage 1) as illustrative examples. These trials tested non-inferiority of new tuberculosis treatment regimens versus a control regimen. The primary outcome was a binary composite endpoint, 'favourable' or 'unfavourable', which was constructed from several components. RESULTS We propose the following improvements in handling the above-mentioned ICEs and loss to follow-up (a post-randomisation event that is not in itself an ICE). First, changes to allocated regimens should not necessarily be viewed as an unfavourable outcome; from the patient perspective, the potential harms associated with a change in the regimen should instead be directly quantified. 
Second, handling poor adherence to randomised treatment using a per-protocol analysis does not necessarily target a clear estimand; instead, it would be desirable to develop ways to estimate the treatment effects more relevant to programmatic settings. Third, re-infection with a new strain of tuberculosis could be handled with different strategies, depending on whether the outcome of interest is the ability to attain culture negativity from infection with any strain of tuberculosis, or specifically the presenting strain of tuberculosis. Fourth, where possible, death could be separated into tuberculosis-related and non-tuberculosis-related and handled using appropriate strategies. Finally, although some losses to follow-up would result in early treatment discontinuation, patients lost to follow-up before the end of the trial should not always be classified as having an unfavourable outcome. Instead, loss to follow-up should be separated from not completing the treatment, which is an ICE and may be considered as an unfavourable outcome. CONCLUSION The estimand framework clarifies many issues in tuberculosis trials but also challenges trialists to justify and improve their outcome definitions. Future trialists should consider all the above points in defining their outcomes.
3. A Method to Estimate the Efficacy vs. Effectiveness in Meta-Analysis of Clinical Trials with Different Adherence Scenarios: A Monte Carlo Simulation Study in Nutrition. Nutrients 2021; 13:2352. PMID: 34371861; PMCID: PMC8308700; DOI: 10.3390/nu13072352.
Abstract
Randomized clinical trials (RCTs) evaluating the effectiveness of interventions to promote fruit and vegetable (FV) consumption usually report intention-to-treat (ITT) analysis as the main outcome. These analyses compare the randomly assigned groups and accept that some individuals may not follow the recommendations received in their group. The ITT analysis is useful to quantify the global effect of promoting the consumption of FV in a population (effectiveness) but, if non-adherence is substantial in the RCT, it cannot estimate the specific effect in the individuals who increased their FV consumption (efficacy). To calculate the efficacy of FV consumption, a per-protocol (PP) analysis would have to be carried out, in which groups of individuals are compared according to their actual adherence to FV consumption, regardless of the group to which they were assigned; unfortunately, many RCTs do not report a PP analysis. The objective of this article is to apply a new method to estimate PP efficacy in meta-analyses (MA) that include ITT effectiveness RCTs lacking estimates of adherence. The method is based on generating Monte Carlo simulations of percentages of adherence in each allocation group from prior distributions informed by expert knowledge. We illustrate the method by reanalysing a Cochrane Systematic Review (SR) of RCTs on increased FV consumption reported with ITT, simulating the estimation of a PP meta-analysis 1000 times, and obtaining means and ranges of the potential PP effects. In some cases, the range of estimated PP effects was clearly more favourable than the effect calculated under the original ITT assumption, and therefore this corrected analysis should be considered when estimating the true effect of the consumption of a certain food.
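The core of the approach can be sketched in a few lines: draw adherence percentages for each arm from expert-informed prior distributions, rescale the ITT effect with a compliance correction, and summarise the simulated PP effects. This is an illustrative sketch, not the authors' implementation; the Beta priors, the IV-style correction, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pp_effect(itt_effect, n_sims=1000,
                       adher_treat_prior=(8, 2),   # Beta prior: ~80% adhere in the intervention arm
                       uptake_ctrl_prior=(2, 8)):  # Beta prior: ~20% of controls increase FV anyway
    """Draw per-arm adherence from expert-informed Beta priors and rescale the
    ITT effect with the usual compliance correction:
    PP = ITT / (adherence_treatment - uptake_control)."""
    a_t = rng.beta(*adher_treat_prior, size=n_sims)
    a_c = rng.beta(*uptake_ctrl_prior, size=n_sims)
    pp = itt_effect / np.clip(a_t - a_c, 0.05, None)  # guard against implausible denominators
    return float(pp.mean()), (float(np.percentile(pp, 2.5)),
                              float(np.percentile(pp, 97.5)))

mean_pp, (lo, hi) = simulate_pp_effect(itt_effect=0.20)
print(f"ITT effect 0.20 -> simulated PP effects: mean {mean_pp:.2f}, range {lo:.2f} to {hi:.2f}")
```

As the abstract notes, the corrected PP effect is typically more favourable than the ITT estimate, because the same effect is attributed to the smaller adherent fraction.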
4. Treatment of missing data in follow-up studies of randomised controlled trials: A systematic review of the literature. Clin Trials 2017; 14:387-395. PMID: 28385071; DOI: 10.1177/1740774517703319.
Abstract
BACKGROUND/AIMS After completion of a randomised controlled trial, an extended follow-up period may be initiated to learn about longer term impacts of the intervention. Since extended follow-up studies often involve additional eligibility restrictions and consent processes for participation, and a longer duration of follow-up entails a greater risk of participant attrition, missing data can be a considerable threat in this setting. As a potential source of bias, it is critical that missing data are appropriately handled in the statistical analysis, yet little is known about the treatment of missing data in extended follow-up studies. The aims of this review were to summarise the extent of missing data in extended follow-up studies and the use of statistical approaches to address this potentially serious problem. METHODS We performed a systematic literature search in PubMed to identify extended follow-up studies published from January to June 2015. Studies were eligible for inclusion if the original randomised controlled trial results were also published and if the main objective of extended follow-up was to compare the original randomised groups. We recorded information on the extent of missing data and the approach used to treat missing data in the statistical analysis of the primary outcome of the extended follow-up study. RESULTS Of the 81 studies included in the review, 36 (44%) reported additional eligibility restrictions and 24 (30%) consent processes for entry into extended follow-up. Data were collected at a median of 7 years after randomisation. Excluding 28 studies with a time to event primary outcome, 51/53 studies (96%) reported missing data on the primary outcome. The median percentage of randomised participants with complete data on the primary outcome was just 66% in these studies. The most common statistical approach to address missing data was complete case analysis (51% of studies), while likelihood-based analyses were also well represented (25%). 
Sensitivity analyses around the missing data mechanism were rarely performed (25% of studies), and when they were, they often involved unrealistic assumptions about the mechanism. CONCLUSION Despite missing data being a serious problem in extended follow-up studies, statistical approaches to addressing missing data were often inadequate. We recommend researchers clearly specify all sources of missing data in follow-up studies and use statistical methods that are valid under a plausible assumption about the missing data mechanism. Sensitivity analyses should also be undertaken to assess the robustness of findings to assumptions about the missing data mechanism.
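The sensitivity analyses the review calls for can be illustrated with a simple tipping-point sketch for a binary outcome: every plausible success rate among the missing participants in each arm is imputed in turn, and one checks whether the trial's conclusion survives. All counts below are hypothetical, not from any study in the review.

```python
import math

def tipping_point_grid(success_t, n_obs_t, miss_t, success_c, n_obs_c, miss_c):
    """Impute every combination of success rates (0%, 10%, ..., 100%) among the
    missing participants in each arm, and test the resulting risk difference
    with a Wald z-test. Where the conclusion flips is the 'tipping point'."""
    grid = []
    for i in range(11):
        for j in range(11):
            st = success_t + (i / 10) * miss_t      # imputed successes, treatment arm
            sc = success_c + (j / 10) * miss_c      # imputed successes, control arm
            nt, nc = n_obs_t + miss_t, n_obs_c + miss_c
            p1, p0 = st / nt, sc / nc
            se = math.sqrt(p1 * (1 - p1) / nt + p0 * (1 - p0) / nc)
            z = (p1 - p0) / se if se > 0 else 0.0
            grid.append((i / 10, j / 10, abs(z) > 1.96))
    return grid

# hypothetical trial: 60/80 vs 45/80 observed successes, 20 missing per arm
grid = tipping_point_grid(60, 80, 20, 45, 80, 20)
robust = sum(sig for *_, sig in grid)
print(f"{robust} of {len(grid)} imputation scenarios remain statistically significant")
```

If significance holds across all plausible corners of the grid, the finding is robust to the missing data mechanism; if it flips under realistic scenarios, a complete case analysis alone is not trustworthy.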
5. Should multiple imputation be the method of choice for handling missing data in randomized trials? Stat Methods Med Res 2016; 27:2610-2626. PMID: 28034175; PMCID: PMC5393436; DOI: 10.1177/0962280216683570.
Abstract
The use of multiple imputation has increased markedly in recent years, and journal reviewers may expect to see multiple imputation used to handle missing data. However, in randomized trials, where treatment group is always observed and independent of baseline covariates, other approaches may be preferable. Using data simulation, we evaluated multiple imputation, performed both overall and separately by randomized group, across a range of commonly encountered scenarios. We considered both missing outcome and missing baseline data, with missing outcome data induced under missing at random mechanisms. Provided the analysis model was correctly specified, multiple imputation produced unbiased treatment effect estimates, but alternative unbiased approaches were often more efficient. When the analysis model overlooked an interaction effect involving randomized group, multiple imputation produced biased estimates of the average treatment effect when applied to missing outcome data, unless imputation was performed separately by randomized group. Based on these results, we conclude that multiple imputation should not be seen as the only acceptable way to handle missing data in randomized trials. In settings where multiple imputation is adopted, we recommend that imputation is carried out separately by randomized group.
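The paper's recommendation, imputing separately by randomized group so that group-by-covariate interactions survive imputation, can be sketched as follows. This is a deliberately minimal regression-based imputation reporting only the Rubin's-rules point estimate (variance pooling omitted); all simulated parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trial with a group-by-baseline interaction; true mean difference is 1.0
n = 400
group = rng.integers(0, 2, n)
x = rng.normal(size=n)
y = 1.0 * group + 0.5 * x + 1.0 * group * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)  # ~30% of outcomes missing at random

def impute_by_group(y_obs, x, group, m=20):
    """Regression imputation run separately within each randomized group, so the
    group-by-x interaction is preserved without modelling it explicitly."""
    effects = []
    for _ in range(m):
        y_imp = y_obs.copy()
        for g in (0, 1):
            obs = (group == g) & ~np.isnan(y_obs)
            mis = (group == g) & np.isnan(y_obs)
            # within-group linear regression of y on x
            X = np.column_stack([np.ones(obs.sum()), x[obs]])
            beta, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
            resid_sd = (y_obs[obs] - X @ beta).std()
            y_imp[mis] = beta[0] + beta[1] * x[mis] + rng.normal(scale=resid_sd, size=mis.sum())
        effects.append(y_imp[group == 1].mean() - y_imp[group == 0].mean())
    return float(np.mean(effects))  # Rubin's rules point estimate

effect = impute_by_group(y_obs, x, group)
print(f"pooled treatment effect estimate: {effect:.2f} (truth: 1.0)")
```

A single imputation model fitted to both arms without the interaction term would, as the paper shows, bias this estimate; the within-group fit sidesteps the problem automatically.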
6. Best (but oft-forgotten) practices: intention-to-treat, treatment adherence, and missing participant outcome data in the nutrition literature. Am J Clin Nutr 2016; 104:1197-1201. PMID: 27733397; DOI: 10.3945/ajcn.115.123315.
Abstract
Among clinical trials of adequate size, randomization balances both known and unknown prognostic factors between trial arms, thus allowing an unbiased comparison of intervention and control. To preserve this benefit, all randomly assigned participants should be followed to study termination and analyzed in the arm to which they were allocated. There are 2 potential limitations in study implementation: 1) patients are nonadherent and continue with follow-up visits, or 2) patients are lost to follow-up and their outcome data are missing. Herein, we address these issues with an emphasis on binary outcomes, and discuss how authors of randomized trials should address issues of both noncompliance and missing data.
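The two limitations above are easiest to see on a toy binary outcome: ITT compares everyone as randomized, while a naive per-protocol analysis drops non-adherers and can exaggerate the effect. A sketch on a hypothetical 10-participant trial:

```python
def itt_and_pp(arm, adherent, outcome):
    """Contrast intention-to-treat (all randomized, analysed as randomized) with
    a naive per-protocol analysis (adherent participants only) on a binary outcome."""
    data = list(zip(arm, adherent, outcome))

    def rate(rows):
        return sum(o for *_, o in rows) / len(rows)

    itt = rate([r for r in data if r[0] == 1]) - rate([r for r in data if r[0] == 0])
    pp = (rate([r for r in data if r[0] == 1 and r[1]])
          - rate([r for r in data if r[0] == 0 and r[1]]))
    return itt, pp

# hypothetical data: 1 = intervention arm / adherent / good outcome
arm      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
adherent = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1]
outcome  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
itt, pp = itt_and_pp(arm, adherent, outcome)
print(f"ITT risk difference {itt:.2f}, per-protocol {pp:.2f}")  # prints 0.40 vs 0.80
```

The per-protocol estimate doubles here precisely because the non-adherers with poor outcomes were discarded, which is why the abstract treats nonadherence and missing outcomes as threats to the randomization's balance.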
7. Theory-Driven Process Evaluation of the SHINE Trial Using a Program Impact Pathway Approach. Clin Infect Dis 2016; 61 Suppl 7:S752-8. PMID: 26602304; PMCID: PMC4657588; DOI: 10.1093/cid/civ716.
Abstract
Two reasons for the lack of success of programs or interventions are poor alignment of interventions with the causes of the problem targeted by the intervention, leading to poor efficacy (theory failure), and failure to implement interventions as designed (program failure). These failures are important for both public health programs and randomized trials. In the Sanitation Hygiene and Infant Nutrition Efficacy (SHINE) Trial, we utilize the program impact pathway (PIP) approach to track intervention implementation and behavior uptake. In this article, we present the SHINE PIP including definitions and measurements of key mediating domains, and discuss the implications of this approach for randomized trials. Operationally, the PIP can be used for monitoring and strengthening intervention delivery, facilitating course-correction at various stages of implementation. Analytically, the PIP can facilitate a richer understanding of the mediating and modifying determinants of intervention impact than would be possible from an intention-to-treat analysis alone.
8. Sodium glucose co-transporter 2 inhibitors for glycemic control in type 2 diabetes mellitus: Quality of reporting of randomized controlled trials. Perspect Clin Res 2016; 7:21-7. PMID: 26955572; PMCID: PMC4763513; DOI: 10.4103/2229-3485.173777.
Abstract
Background: Sodium glucose co-transporter 2 inhibitors represent a novel class of antidiabetic drugs. The reporting quality of the trials evaluating the efficacy of these agents for glycemic control in type 2 diabetes mellitus has not been explored. Our aim was to assess the reporting quality of such randomized controlled trials (RCTs) and to identify predictors of reporting quality. Materials and Methods: A systematic literature search was conducted for RCTs published up to 12 June 2014. Two independent investigators carried out the searches and assessed reporting quality on three parameters: overall quality score (OQS) using the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement, Jadad score, and intention-to-treat analysis. Inter-rater agreement was compared using Cohen's weighted kappa statistic. Multivariable linear regression analysis was used to identify predictors. Results: Thirty-seven relevant RCTs were included in the present analysis. The median OQS was 17, with a range from 8 to 21. On the Jadad scale, the median score was three, with a range from 0 to 5. Complete details about allocation concealment and blinding were present in 21 and 10 studies, respectively. Most studies lacked an elaborate discussion of trial limitations and generalizability. Among the factors significantly associated with reporting quality were the publishing journal and the region in which the RCT was conducted. Conclusions: The key methodological items remain poorly reported in most studies. Strategies such as stricter adherence to CONSORT guidelines by journals, access to full trial protocols, and full collaboration among investigators and methodologists might prove helpful in improving the quality of published RCT reports.
9. The Choice of Analytical Strategies in Inverse-Probability-of-Treatment-Weighted Analysis: A Simulation Study. Am J Epidemiol 2015; 182:520-7. PMID: 26316599; DOI: 10.1093/aje/kwv098.
Abstract
We sought to explore the impact of intention-to-treat and complex treatment-use assumptions made during weight construction on the validity and precision of estimates derived from inverse-probability-of-treatment-weighted analysis. We simulated data assuming a nonexperimental design that attempted to quantify the effect of statins on lowering low-density lipoprotein cholesterol. We created 324 scenarios by varying parameter values (effect size, sample size, adherence level, probability of treatment initiation, and associations between low-density lipoprotein cholesterol and treatment initiation and continuation). Four analytical approaches were used: 1) assuming intention to treat; 2) assuming complex mechanisms of treatment use; 3) assuming a simple mechanism of treatment use; and 4) assuming invariant confounders. With a continuous outcome, estimates assuming intention to treat were biased toward the null when there was a nonnull treatment effect and nonadherence after treatment initiation. For each 1% decrease in the proportion of patients staying on treatment after initiation, the bias in the estimated average treatment effect increased by 1%. Inverse-probability-of-treatment-weighted analyses that took into account the complex mechanisms of treatment use generated approximately unbiased estimates. Studies estimating the actual effect of a time-varying treatment need to consider the complex mechanisms of treatment use during weight construction.
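For the simpler point-treatment case, the weighting idea can be sketched with a numpy-only propensity model: fit treatment on the confounder, weight each subject by the inverse probability of the treatment actually received, and compare weighted outcome means. The data-generating values are invented for illustration, and this ignores the time-varying mechanisms the paper studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, iters=15):
    """Minimal Newton-Raphson logistic regression (numpy only)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])      # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))   # Newton step
    return beta

# Simulated nonexperimental data: confounder c drives both statin initiation
# and LDL level; the true treatment effect is -30 (all values invented).
n = 5000
c = rng.normal(size=n)
treat = rng.random(n) < 1 / (1 + np.exp(-(0.5 + c)))
ldl = 130 + 10 * c - 30 * treat + rng.normal(scale=5, size=n)

# Propensity scores and stabilized inverse-probability-of-treatment weights
X = np.column_stack([np.ones(n), c])
ps = 1 / (1 + np.exp(-X @ fit_logistic(X, treat.astype(float))))
w = np.where(treat, treat.mean() / ps, (1 - treat.mean()) / (1 - ps))

naive = ldl[treat].mean() - ldl[~treat].mean()
iptw = (np.average(ldl[treat], weights=w[treat])
        - np.average(ldl[~treat], weights=w[~treat]))
print(f"naive difference {naive:.1f}; IPTW estimate {iptw:.1f} (truth -30)")
```

The weighted comparison removes the confounding that biases the naive contrast; the paper's point is that with time-varying treatment, the weight model must also capture continuation and discontinuation, not just initiation.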
10.
Abstract
Background: The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. Methods: The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. Results: We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. 
Conclusion: Randomisation errors are almost inevitable and should be reported in trial publications. The intention-to-treat principle is useful for guiding responses to randomisation errors when they are discovered.
11. A causal model for longitudinal randomised trials with time-dependent non-compliance. Stat Med 2015; 34:2019-34. PMID: 25778798; PMCID: PMC4672693; DOI: 10.1002/sim.6468.
Abstract
In the presence of non-compliance, conventional analysis by intention-to-treat provides an unbiased comparison of treatment policies but typically under-estimates treatment efficacy. With all-or-nothing compliance, efficacy may be specified as the complier-average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time-dependent non-compliance, focusing on the situation in which those randomised to control may receive treatment and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually received intervention. We find that surgery is more beneficial than control at 6 months, with a small but non-significant beneficial effect at 12 months.
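The paper's longitudinal Bayesian model is far richer, but the underlying CACE idea reduces, for a single time point with all-or-nothing compliance, to the familiar instrumental-variable ratio: the ITT effect divided by the difference in treatment receipt between arms. A sketch with invented arm-level data:

```python
def cace(y_treat, y_control, rx_treat, rx_control):
    """Complier-average causal effect for all-or-nothing compliance:
    the ITT effect scaled by the between-arm difference in treatment receipt
    (the standard instrumental-variable estimator)."""
    itt = sum(y_treat) / len(y_treat) - sum(y_control) / len(y_control)
    receipt_diff = (sum(rx_treat) / len(rx_treat)
                    - sum(rx_control) / len(rx_control))
    return itt / receipt_diff

# hypothetical data echoing the trial's structure: over half of the
# controls eventually receive the surgical intervention
y_t = [3.0, 2.5, 2.0, 3.5, 2.0]   # outcome scores, surgery arm
y_c = [2.0, 1.5, 2.5, 1.0, 2.0]   # outcome scores, control arm
rx_t = [1, 1, 1, 1, 1]            # everyone in the surgery arm operated on
rx_c = [1, 1, 1, 0, 0]            # 3 of 5 controls also operated on
print(f"ITT effect 0.80 -> CACE estimate: {cace(y_t, y_c, rx_t, rx_c):.2f}")
```

Because only 40% of the contrast in treatment receipt separates the arms, the 0.8 ITT effect scales up to a CACE of 2.0; the paper generalises this logic to compliance types defined by the *time* of crossover.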
12. Rational Helicobacter pylori therapy: evidence-based medicine rather than medicine-based evidence. Clin Gastroenterol Hepatol 2014; 12:177-86.e3; discussion e12-3. PMID: 23751282; PMCID: PMC3830667; DOI: 10.1016/j.cgh.2013.05.028.
Abstract
Data are available such that choice of Helicobacter pylori therapy for an individual patient can be reliably predicted. Here, treatment success is defined as a cure rate of 90% or greater. Treatment outcome in a population or a patient can be calculated based on the effectiveness of a regimen for infections with susceptible and with resistant strains coupled with the knowledge of the prevalence of resistance (ie, based on formal measurement, clinical experience, or both). We provide the formula for predicting outcome and we illustrate the calculations. Because clarithromycin-containing triple therapy and 10-day sequential therapy are now only effective in special populations, they are considered obsolete; neither should continue to be used as empiric therapies (ie, 7- and 14-day triple therapies fail when clarithromycin resistance exceeds 5% and 15%, respectively, and 10-day sequential therapy fails when metronidazole resistance exceeds 20%). Therapy should be individualized based on prior history and whether the patient is in a high-risk group for resistance. The preferred choices for Western countries are 14-day concomitant therapy, 14-day bismuth quadruple therapy, and 14-day hybrid sequential-concomitant therapy. We also provide details regarding the successful use of fluoroquinolone-, rifabutin-, and furazolidone-containing therapies. Finally, we provide recommendations for the efficient development (ie, identification and optimization) of new regimens, as well as how to prevent or minimize failures. The trial-and-error approach for identifying and testing regimens frequently resulted in poor treatment success. The described approach allows outcome to be predicted and should simplify treatment and drug development.
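The prediction formula described in the abstract is a prevalence-weighted average of the regimen's cure rates in susceptible and resistant infections. A sketch (the cure rates here are illustrative, not the paper's figures):

```python
def predicted_cure_rate(cure_susceptible, cure_resistant, resistance_prevalence):
    """Population cure rate = weighted average of the regimen's cure rates in
    susceptible and resistant infections, weighted by resistance prevalence."""
    return (cure_susceptible * (1 - resistance_prevalence)
            + cure_resistant * resistance_prevalence)

# e.g. a triple therapy curing 95% of susceptible but only 20% of
# clarithromycin-resistant infections (hypothetical figures):
for prev in (0.05, 0.15, 0.30):
    rate = predicted_cure_rate(0.95, 0.20, prev)
    print(f"resistance prevalence {prev:.0%} -> predicted cure rate {rate:.1%}")
```

This makes the abstract's point quantitative: as resistance prevalence climbs, the predicted population cure rate slides below the 90% threshold that defines treatment success, which is why empiric use of such regimens becomes indefensible.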
13. Effects of linaclotide in patients with irritable bowel syndrome with constipation or chronic constipation: a meta-analysis. Clin Gastroenterol Hepatol 2013; 11:1084-1092.e3; quiz e68. PMID: 23644388; DOI: 10.1016/j.cgh.2013.04.032.
Abstract
BACKGROUND & AIMS Linaclotide is a minimally absorbed, 14-amino acid peptide used to treat patients with irritable bowel syndrome with constipation (IBS-C) or chronic constipation (CC). We performed a meta-analysis to determine the efficacy of linaclotide, compared with placebo, for patients with IBS-C or CC. METHODS MEDLINE, EMBASE, and the Cochrane central register of controlled trials were searched for randomized, placebo-controlled trials examining the effect of linaclotide in adults with IBS-C or CC. Dichotomous results were pooled to yield a relative risk (RR), 95% confidence intervals (CIs), and number needed to treat (NNT). RESULTS The search identified 7 trials of linaclotide in patients with IBS-C or CC; 6 were included in the analysis. Two of 3 trials of IBS-C used the end point recommended by the U.S. Food and Drug Administration: an increase from baseline of 1 or more complete spontaneous bowel movement (CSBM)/week and a 30% or more reduction from baseline in the weekly average of daily worst abdominal pain scores for 50% of the treatment weeks. On the basis of this end point, the RR for response to treatment with 290 μg linaclotide, compared with placebo, was 1.95 (95% CI, 1.3-2.9), and the NNT was 7 (95% CI, 5-11). For CC, on the basis of data from 3 trials of patients with CC, the RR for the primary end point (more than 3 CSBMs/week and an increase in 1 or more CSBM/week, for 75% of weeks) was 4.26 for 290 μg linaclotide vs placebo (95% CI, 2.80-6.47), and the NNT was 7 (95% CI, 5-8). Linaclotide also improved stool form and reduced abdominal pain, bloating, and overall symptom severity in patients with IBS-C or CC. CONCLUSIONS On the basis of a meta-analysis, linaclotide improves bowel function and reduces abdominal pain and overall severity of IBS-C or CC, compared with placebo.
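The pooling step of such a meta-analysis can be sketched with fixed-effect inverse-variance weighting on the log relative-risk scale, plus an NNT derived from the pooled RR at an assumed control response rate. The trial counts below are invented, not the linaclotide data.

```python
import math

def pooled_rr(trials):
    """Fixed-effect inverse-variance pooling on the log relative-risk scale.
    Each trial is (events_treat, n_treat, events_control, n_control)."""
    num = den = 0.0
    for a, n1, c, n0 in trials:
        log_rr = math.log((a / n1) / (c / n0))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n0   # variance of log RR
        num += log_rr / var
        den += 1 / var
    mean_log, se = num / den, math.sqrt(1 / den)
    return math.exp(mean_log), (math.exp(mean_log - 1.96 * se),
                                math.exp(mean_log + 1.96 * se))

def nnt(rr, control_risk):
    """NNT = 1 / absolute risk difference implied by the RR at a control risk."""
    return 1 / abs(control_risk * (rr - 1))

# invented trial counts: (responders_treat, n_treat, responders_control, n_control)
trials = [(80, 400, 40, 400), (60, 300, 35, 300), (100, 500, 55, 500)]
rr, (ci_lo, ci_hi) = pooled_rr(trials)
print(f"pooled RR {rr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), NNT {nnt(rr, 0.10):.0f}")
```

Note that an NNT always depends on an assumed baseline risk; the meta-analysis's reported NNT of 7 corresponds to the control response rates observed in the included trials.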
14. Pragmatic randomised controlled trials in parenting research: the issue of intention to treat. J Epidemiol Community Health 2006; 60:858-64. PMID: 16973532; PMCID: PMC2566053; DOI: 10.1136/jech.2005.044214.
Abstract
STUDY OBJECTIVE To evaluate trials of parenting programmes with regard to their use of intention to treat (ITT). DESIGN Individual trials included in two relevant Cochrane systematic reviews were scrutinised by two independent reviewers. Data on country of origin, target audience, trial type, treatment violations, use of ITT, and the management of missing data were extracted. MAIN RESULTS Thirty trial reports were reviewed. Three reported the use of an ITT approach to data analysis. Nineteen reported losing subjects to follow-up, although the implications of this were rarely considered. Insufficient detail in reports meant it was difficult to identify study drop-outs, the nature of treatment violations, and those failing to provide outcome assessments. In two trials, study drop-outs were treated as additional control groups, violating the basic principle of ITT. CONCLUSIONS It is recommended that future trial reports adhere to CONSORT guidelines. In particular, ITT should be used for the main analyses, with strategies for managing treatment violations and handling missing data reported a priori. Those conducting trials need to acknowledge that the social nature of these programmes can sometimes result in erratic parent attendance and participation, which increases the chances of missing data. The use of approaches that can limit the proportion of missing data is therefore recommended.