Batura N, Pulkki-Brännström AM, Agrawal P, Bagra A, Haghparast-Bidgoli H, Bozzani F, Colbourn T, Greco G, Hossain T, Sinha R, Thapa B, Skordis-Worrall J. Collecting and analysing cost data for complex public health trials: reflections on practice. Glob Health Action 2014;7:23257. [PMID: 24565214; PMCID: PMC3929994; DOI: 10.3402/gha.v7.23257]
Abstract
Background
Current guidelines for the conduct of cost-effectiveness analysis (CEA) are mainly applicable to facility-based interventions in high-income settings. Differences in the unit of analysis and the high cost of data collection can make these guidelines challenging to follow within public health trials in low- and middle-income settings.
Objective
This paper reflects on the challenges experienced within our own work and proposes solutions that may be useful to others attempting to collect, analyse, and compare cost data between public health research sites in low- and middle-income countries.
Design
We describe the generally accepted methods (norms) for collecting and analysing cost data in a single-site trial from the provider perspective. We then describe our own experience applying these methods within eight comparable cluster randomised controlled trials. We describe the strategies used to maximise adherence to the norm, highlight ways in which we deviated from the norm, and reflect on the learning and limitations that resulted.
Results
When the expenses incurred by a number of small research sites are used to estimate the cost-effectiveness of delivering an intervention on a national scale, deciding which expenses constitute ‘start-up’ costs is a nontrivial decision that may differ among sites. Similarly, the decision to include or exclude research or monitoring and evaluation costs can have a significant impact on the findings. We separated out research costs and argued that monitoring and evaluation costs should be reported as part of the total trial cost. The human resource constraints that we experienced are also likely to be common to other trials. As we did not have an economist in each site, we collaborated with key personnel at each site who were trained to use a standardised cost collection tool. This approach both accommodated our resource constraints and served as a knowledge-sharing and capacity-building process within the research teams.
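The sensitivity described above can be made concrete with a simple calculation. The sketch below uses purely hypothetical figures (they are not drawn from the trials themselves) to show how the choice to include or exclude monitoring and evaluation spending shifts an estimated cost per beneficiary.

```python
# Hypothetical illustration only: the figures below are invented and do not
# come from the trials described in the paper.

# Expenditure categories for one notional site (in USD)
costs = {
    "intervention_delivery": 120_000,   # staff, supplies, transport
    "start_up": 30_000,                 # training, initial equipment
    "monitoring_evaluation": 25_000,    # routine M&E of the intervention
    "research": 80_000,                 # surveys and data collection for the trial itself
}

beneficiaries = 15_000  # notional number of people reached


def cost_per_beneficiary(included_categories):
    """Total cost per beneficiary for a chosen set of cost categories."""
    total = sum(costs[c] for c in included_categories)
    return total / beneficiaries


# Research costs excluded but M&E retained, in line with the authors' argument
programme_view = cost_per_beneficiary(
    ["intervention_delivery", "start_up", "monitoring_evaluation"]
)

# Both research and M&E costs excluded
narrow_view = cost_per_beneficiary(["intervention_delivery", "start_up"])

print(f"Cost per beneficiary, M&E included: ${programme_view:.2f}")
print(f"Cost per beneficiary, M&E excluded: ${narrow_view:.2f}")
```

Even with these toy numbers the classification decision moves the unit cost by roughly 15%, which illustrates why the authors argue that monitoring and evaluation costs should be reported as part of the total trial cost.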
Conclusions
Given the practical reality of conducting randomised controlled trials of public health interventions in low- and middle-income countries, it is not always possible to adhere to prescribed guidelines for the analysis of cost-effectiveness. Compromises are frequently required as researchers seek a pragmatic balance between rigour and feasibility. There is no single solution to this tension, but researchers are encouraged to be mindful of the limitations that accompany compromise, whilst being reassured that meaningful analyses can still be conducted with the resulting data.