This is a common and challenging question. Most often, research is carried out with a specific population of patients, and the actual intervention may or may not be well described, especially in the case of complex or behavioural interventions. The process of critical appraisal helps us look more closely at how the original study was designed and carried out. While we are looking for potential sources of bias, we are also looking closely at the population of participants and at the actual intervention.
So let’s assume that we know a behavioural intervention works: we have a clear description of the intervention (a sexual risk reduction programme for adolescents titled Focus on Kids), and we know it was implemented for a specific population group (African-American youth living in urban public housing in Baltimore) by appropriately qualified personnel, according to the protocol. How, then, do we know whether we can make small modifications for a different set of patients, at a different style of clinic, with a different professional, and still expect the intervention to work? It is impossible to carry out high-quality studies across all of these situations, so we need a pragmatic middle ground. At the same time, there is evidence that modifications can affect expected outcomes both positively and negatively.
In an ideal situation, we would know exactly what the core components of every intervention are, so we could be sure not to modify or adapt them. But when the intervention is behavioural, these components are more likely to change with individuals and contexts.
An alternative approach has been described in a paper recently published in Implementation Science. Shannon Wiltsey Stirman and her American colleagues used a novel but transparent systematic review to develop a coding scheme for modifications to evidence-based behavioural interventions. The coding system captures how interventions are modified during implementation, whether planned or unplanned, and in relation to the intervention’s content, its context, and the supporting training and educational materials. The authors argue that by understanding how interventions are commonly modified, we can determine which modifications still achieve the expected outcomes and which do not. They also recommend that this coding system be used alongside a more traditional evaluation of the fidelity with which the intervention is delivered. Fidelity of implementation is the degree to which an intervention is delivered according to its intended plan, usually described in a protocol. Together, the authors suggest, this dual analysis may help identify the core components of an intervention.