Reflections from the workshop: Towards comprehensive evaluation for health and development: promoting the integration of evaluation methods
A workshop funded by the British Council Researcher Links programme and the EC SDH-Net consortium
By Jo Borghi (LSHTM)
Economic evaluation is recognised as a key component of evaluation, assessing value for money and whether resources are used efficiently. While the methods for undertaking economic evaluation are well established for clinical interventions, there is much less guidance on their application to complex interventions. The Medical Research Council (MRC) provides a useful framework for defining complex interventions, which are characterised by numerous interacting components, uncertainty about the timing of outcomes and when to measure them, and a lack of predictability in how interventions unfold in practice, owing to local flexibility in, and oversight of, the delivery process. Interventions often operate within complex systems, for example health systems, which adds a further level of complexity to the conduct of economic evaluation, as outlined by Alan Shiell (2008).
A number of challenges facing economists evaluating complex interventions emerged during the day's presentations.
Measuring costs – where to draw the line?
The best-known part of an economist's job is the measurement of intervention costs: not just what was paid for by the implementer (the financial costs), but the value of all resources employed in delivering or receiving the intervention (the economic, or opportunity, costs).
With complex interventions, the implementer and beneficiary response to an intervention can be a critical determinant of its success. This (economic) input should ideally be valued, but doing so is not without challenges: because implementation experience is likely to vary across settings and over time, so too are costs. The 'true' economic cost refers to the resources consumed in delivering the intervention, and what counts is largely defined by the perspective of the analysis. Accurate measurement will require a flexible approach to data collection and careful thought about how to sample respondents so that costs in different settings are accurately reflected.
Further, complex interventions may interact with or affect the implementation and cost of other interventions (for example, performance based financing often results in changes to routine health information systems, with implications for the time spent compiling such data). Certain interventions may result in system-level change, such as a shift from capitation to fee for service in purchasing services. Should the economic impact of these changes be measured and reported – and if so, how? And how should the costs of unintended consequences be dealt with?
Andrew Mirelman, a Research Fellow from the University of York, described a cost-effectiveness analysis of a performance based financing scheme in Argentina, Plan Nacer, conducted by the World Bank, which only considered financial costs. Participants agreed that this presents a very narrow picture of programme costs, and that efforts to measure economic as well as financial costs are critical. It is less clear where to draw the boundaries around what is measured and valued. This decision is best taken at the outset by those designing the study, based on the available resources and the objectives of the evaluation.
Outcome Measurement or Prediction?
A further challenge is measuring outcomes when they occur beyond the end of the evaluation period, or when resources are insufficient to measure them. In such cases, the evaluation team must decide whether to stop at process outcomes, which can be reliably measured but may be unsatisfactory in terms of the resulting cost-effectiveness ratio, or to predict or model final outcomes, which requires assumptions to be made and may yield misleading or inaccurate estimates. It was also argued that complex health economic models can imply a level of scientific rigour that does not necessarily exist in reality.
Antony Martin, a Research Fellow from the University of Liverpool, illustrated this in relation to an economic evaluation of plerixafor compared with conventional chemotherapy for first-line stem cell mobilisation. In this case the primary outcome was restricted to patients who achieved a stem cell harvest sufficient to facilitate a stem cell transplant. This restricted the comparison to other interventions with the same narrowly defined objective, and several limitations of such analyses were highlighted. It was recommended that, where feasible, the analysis be extended to identify how intervention outcomes translate into improvements in patients' length and quality of life.
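Where an analysis is extended in this way, a simple cohort model can do the translation from an intermediate outcome into QALYs. The sketch below is purely illustrative: it assumes a hypothetical two-state (alive/dead) Markov cohort model with invented survival probabilities, utility weights and discount rate, and is not the model used in the plerixafor evaluation.

```python
# Hypothetical sketch: translating an intermediate outcome (successful stem
# cell harvest) into quality-adjusted life years (QALYs) with a two-state
# Markov cohort model. All parameters below are invented for illustration.

def expected_qalys(p_survive, utility, cycles):
    """Discounted QALYs for a cohort with a constant annual survival probability."""
    alive, total, discount = 1.0, 0.0, 0.035  # 3.5% annual discount (assumed)
    for year in range(cycles):
        total += alive * utility / (1 + discount) ** year  # QALYs accrued this cycle
        alive *= p_survive                                 # fraction still alive next year
    return total

# Assumed: a successful harvest gives annual survival 0.9 at utility 0.8;
# an unsuccessful one gives survival 0.7 at utility 0.6, over a 20-year horizon.
gain = expected_qalys(0.9, 0.8, 20) - expected_qalys(0.7, 0.6, 20)
print(f"Incremental QALYs per successful harvest: {gain:.2f}")
```

The same structure extends to more states (e.g. relapse, remission) by replacing the single survival probability with a transition matrix.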
Multiplicity of Outcomes
Complex interventions can produce a multiplicity of outcomes: some positive and intended, others potentially unintended consequences. Collapsing outcomes into a single measure such as QALYs or DALYs is not always possible even when dealing with exclusively positive health-related outcomes, let alone when combining positive and negative outcomes, or outcomes from across sectors. Andrew Mirelman discussed whether a broader measure such as subjective wellbeing might provide a solution if it can be related to outcome changes – he pointed us to a paper by John Brazier on this topic (Mukuria and Brazier, 2013).
The economic evaluation of Plan Nacer only considered outcomes among those exposed to the intervention (the effect of treatment on the treated, rather than the average treatment effect more frequently measured in outcome evaluations). Andrew asked whether this is appropriate, or whether differential effects should be explored within sensitivity analysis – the effect of treatment on the treated representing the best possible level of effect, assuming everyone is treated.
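The distinction can be made concrete with a small simulation. The sketch below uses invented numbers, assuming those who take up an intervention would benefit more from it than the average eligible person, so the effect of treatment on the treated (ATT) exceeds the average treatment effect (ATE).

```python
import random

random.seed(0)

# Hypothetical illustration of ATT vs ATE. Assumption: 40% of the eligible
# population takes up the intervention, and those who do would gain 3.0 units
# of outcome, while everyone else would gain only 1.0 if treated.
n = 10_000
population = []
for _ in range(n):
    takes_up = random.random() < 0.4         # 40% uptake (assumed)
    effect = 3.0 if takes_up else 1.0        # heterogeneous effect (assumed)
    population.append((takes_up, effect))

treated = [e for t, e in population if t]
att = sum(treated) / len(treated)            # effect of treatment on the treated
ate = sum(e for _, e in population) / n      # average effect if everyone were treated

print(f"ATT = {att:.2f}, ATE = {ate:.2f}")   # ATT exceeds ATE here
```

With effects this heterogeneous, reporting only the ATT (as in the Plan Nacer evaluation) represents a best case relative to what scale-up to the full population would deliver.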
When interpreting cost-effectiveness results, and deciding whether or not an investment represents value for money, decision makers sometimes refer to a cost-effectiveness threshold: the maximum amount they are willing to pay per unit of outcome. In the United Kingdom this process is formalised, with an established threshold of around £20,000–30,000 per QALY. In 2008, the UK's National Institute for Health and Care Excellence (NICE) established NICE International to offer advice to countries and build capacity for assessing and interpreting evidence of effectiveness and cost-effectiveness to inform health policy.

Eleanor Grieve, a Research Associate at the University of Glasgow, talked about the iDSI (international decision support initiative), launched in 2013 by NICE International, which supports low- and middle-income countries (LMICs) in making better informed decisions and prioritising the allocation of funds. At present, countries with no established national threshold tend to consider interventions highly cost-effective if the cost per DALY averted falls below annual GDP per capita, following recommendations from WHO-CHOICE. Eleanor is also tasked with developing a methodological approach for measuring the impact of Health Technology Assessment policy interventions in LMICs. This will be informed by case studies, including assessing the cost-effectiveness of scaling up a voucher scheme for maternal and child health among poor women in Myanmar using existing economic models developed in the region – a novel application of these methods.
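In practice, the threshold comparison amounts to computing an incremental cost-effectiveness ratio (ICER) and checking it against the willingness-to-pay value. The figures below are invented for illustration; only the £20,000 threshold comes from the lower bound of the NICE range quoted above.

```python
# Illustrative sketch with made-up costs and QALYs: computing an incremental
# cost-effectiveness ratio (ICER) and comparing it to a willingness-to-pay
# threshold, as decision makers in the UK do with the NICE range.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained versus the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical intervention vs comparator (all figures assumed).
ratio = icer(cost_new=12_000, qaly_new=6.2, cost_old=4_000, qaly_old=5.8)
threshold = 20_000  # £ per QALY, lower bound of the NICE range

print(f"ICER = £{ratio:,.0f} per QALY; within threshold: {ratio <= threshold}")
```

For complex interventions the hard part is not this arithmetic but, as the earlier sessions stressed, deciding which costs enter the numerator and which outcomes the denominator.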
Way forward? Methods Innovation
While the session highlighted a number of challenges that economists face when doing economic evaluation of complex interventions, the last presentation shed light on a potential way forward in terms of methods innovation. Armando Vargas-Palacios, a Research Fellow at the University of Leeds, illustrated novel methods, originating in operational research, for modelling outcomes that overcome some of the known limitations of decision tree and Markov models. These approaches can reflect and model the dynamic nature of health care systems, enabling the assessment of costs and outcomes at different levels of the system. They can also serve to predict resource requirements for programme implementation, as well as to model likely outcomes ex ante. Armando introduced us to three methods: discrete event simulation and agent-based modelling, both individual-level models, and system dynamics, which in contrast works at the aggregate level and therefore requires the modelled population to have similar characteristics. The approaches can be implemented using software with free versions or low-cost licences (Berkeley Madonna; Simul8).
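As a flavour of what discrete event simulation involves, the sketch below models patient flow through a hypothetical single-server clinic in plain Python (rather than the specialist packages mentioned above). The arrival and service time parameters are illustrative assumptions.

```python
import random

random.seed(1)

def simulate_clinic(n_patients, arrival_mean, service_mean):
    """Minimal discrete event simulation of a single-server clinic queue.

    Patients arrive at exponentially distributed intervals; each is seen
    when the server is free, in arrival order, and waits otherwise.
    """
    clock, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        clock += random.expovariate(1 / arrival_mean)  # next arrival event
        start = max(clock, server_free_at)             # wait if server busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(1 / service_mean)
    return total_wait / n_patients

# Assumed: arrivals every 5 minutes on average, consultations averaging 4.
avg_wait = simulate_clinic(1_000, arrival_mean=5.0, service_mean=4.0)
print(f"Average wait: {avg_wait:.1f} minutes")
```

Because the model tracks individuals through events, the same loop could be extended to accumulate staff time and other resource use alongside waiting times – the link between such operational models and costing that the presentation highlighted.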
It is clear that to overcome the challenges of evaluating complex interventions, the economist's perspective is needed at all stages of the evaluation process, from design through to analysis. Economic evaluation needs to be designed prospectively and built into evaluation studies from the outset. Robust (economic) evaluation of complex interventions requires methodological innovation, and this happens most successfully when researchers across disciplines come together to drive the subject forward.
Profile of panelists