A key challenge in using research in practice is knowing how to interpret and translate results. Even with clear results from a well-designed randomised controlled trial (RCT), understanding and interpreting those results is not straightforward. Usually we look for an effect size that summarises the average benefit of the intervention across all participants. A key intervention is randomly allocated across two similar groups, so that the benefit (indicated by the effect size) can be attributed to the intervention. It is then usually assumed that the intervention will produce a similar effect size in other, largely similar populations. To test the intervention more thoroughly, a range of RCTs is conducted in different settings, and later a systematic review and meta-analysis of these effect sizes may demonstrate significant benefit. The findings can then be translated more broadly across different settings.
The challenge to internal validity is simple: the effect size is a summary of the average benefit across all individuals in the trial. When the standard deviation around the final outcome measurement is large (often reflected in wide confidence intervals), there is large variability in the population, and fewer individuals actually experience something close to the average effect.
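This point can be sketched with a small simulation. The numbers below are entirely hypothetical (not from any real trial): two treated groups show the same average benefit over a control group, but in the group with a larger standard deviation, far fewer individuals actually score near that average.

```python
import random
import statistics

random.seed(0)

# Hypothetical outcome scores: both treated groups have the same mean
# benefit (~5 points over control), but very different spread.
control = [random.gauss(50, 5) for _ in range(200)]
treated_low_sd = [random.gauss(55, 5) for _ in range(200)]
treated_high_sd = [random.gauss(55, 20) for _ in range(200)]

def near_mean(scores, width=2.0):
    """Fraction of individuals whose score lies within +/- width of the group mean."""
    m = statistics.mean(scores)
    return sum(abs(x - m) <= width for x in scores) / len(scores)

# Both treated groups report roughly the same average benefit...
print(statistics.mean(treated_low_sd) - statistics.mean(control))
print(statistics.mean(treated_high_sd) - statistics.mean(control))

# ...but the high-variability group has a much smaller share of
# individuals whose outcome is anywhere near that average.
print(near_mean(treated_low_sd))   # larger fraction near the mean
print(near_mean(treated_high_sd))  # smaller fraction near the mean
```

The effect size reported for both treated groups would be similar, yet the "average" benefit describes far fewer people in the noisier group.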
The external validity challenges are more complex. If an intervention works somewhere (for the individuals in the study who reported benefits), can we also say that it works in general? Can we then be confident that it will work in the specific setting in which we want to reproduce the reported benefit?
Often the intervention can be difficult to isolate, and there may be hidden factors or a series of steps that support or limit its use. A convincing philosophy paper argues that we need information about at least three areas:
- factors that supported the intervention in its original setting
- confirmation that these factors are present in the new setting
- expectations that these factors will have the same effect in the new setting as they did in the original.
This reasoning is often used to recommend common drugs. Once the mechanism of action of a drug is known, it can be expected to work in similar patients, but not in all people, or in people with a different underlying condition. For example, daily aspirin protects patients who have stable angina and atrial fibrillation against future vascular events, but it is not useful for those of us who do not have these risk factors for cardiac disease.
However, when complex interventions are tested within an RCT, the intervention is more difficult to describe and, potentially, to transfer to other settings. As an example, 20 Dutch primary schools were randomised to receive an intervention programme targeting the prevention of physical injury. While the intervention is comprehensive and well described (behind a paywall), interesting patterns were found in the data. Children who were less active overall reported a larger reduction in injuries following the programme. This effect is likely to have been influenced by factors beyond the intervention itself, such as parents' attitudes, home environments, children's motivation or peer influences. It is difficult to understand how the intervention worked, and therefore more difficult to transfer it to another setting, because we do not know for whom the intervention was most useful, and why.
The challenge in tailoring a programme to a new environment is not to interfere with the original causal factors. But if we do not know what those causal factors are, and for whom they worked best, it is difficult to sustain or replicate any intervention.