Can context explain heterogeneity in complex interventions?

Complex interventions are usually defined as containing several interacting components. These components usually include people (researchers) who are trying to influence other people (healthcare professionals) to do, or not do, something (for example, adhere to clinical guideline recommendations). Some common complex interventions that have been shown to be effective in changing clinical practice include providing educational materials, attending educational meetings, and face-to-face educational outreach visits. While all these interventions have shown modest but important effects on clinical practice, it is often difficult to understand the actions of the healthcare professionals being influenced to change.

On the one hand, we know that there can be wide variation in how the intervention is designed and implemented; printed brochures, posters and journal articles may be read, discussed with colleagues or distributed by senior and respected staff. In fact, we know that journal articles alone do not always describe these complex interventions fully. Ideally we would want to know what the intervention is, who can use it and what training they need, where it should be used, for how long, and why.

On the other hand, it is often the way in which people act that either enhances or limits the effect of an intervention; brochures may be ignored, colleagues may criticise journal articles, and managers may decide to reinforce or enforce certain behaviours. The way people act will inevitably be influenced by their environment. The term ‘context’ is widely used to describe the environment, or situation, surrounding an event, and these circumstances may help in understanding the event. There are often subtle differences in the environments in which healthcare professionals work. Even when people do what is asked of them, they will adapt their behaviour based on the positive or negative feedback they receive. These simple adaptations, which may be a consequence of their environment, can ultimately influence the way interventions are adopted.

If many non-drug healthcare interventions are complex, and depend on people to do something different from their usual everyday habits, then could it be that the way people behave determines how well they implement the intervention? If this behaviour depends on their context, could it be the context that actually limits or enhances the measured effect?

If context can explain some of the variation in actual practice, could this clinical diversity explain a large part of the statistical heterogeneity that is commonly reported but unexplained in the usual forms of sensitivity and subgroup analyses in high quality systematic reviews of complex interventions?
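To make the statistical side of this question concrete, heterogeneity across pooled studies is usually quantified with Cochran's Q and the I² statistic. A minimal sketch in Python, using invented effect sizes and standard errors purely for illustration:

```python
# Minimal sketch: quantifying statistical heterogeneity across studies
# with Cochran's Q and the I^2 statistic.
# The effect sizes and standard errors below are invented for illustration.
effects = [0.40, 0.10, 0.55, -0.05, 0.30]  # per-study effect sizes
ses = [0.15, 0.20, 0.18, 0.25, 0.12]       # per-study standard errors

# Inverse-variance weights and the fixed-effect pooled estimate
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: percentage of total variation due to between-study heterogeneity
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.3f}, Q = {q:.2f}, I^2 = {i_squared:.1f}%")
```

When I² is high, a large share of the variation between study results cannot be explained by chance alone; this is exactly the unexplained remainder where contextual differences between study settings become a plausible explanation.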

There is an inherent irony here, in that clinical trials are usually designed to exclude the potential confounding factors of context. We recognise that randomisation is designed to do this within a particular study. But it seems that when many studies of the same intervention are synthesised, the different contexts in which people work influence their behaviour in various ways, which together limit the expected positive effect.

There is a burgeoning interest in defining and evaluating context in healthcare research. Sadly, it seems that the inherent complexity of context can be best understood in retrospect. But perhaps there is a middle ground: to look for aspects of context that have been reported by study authors to have contributed to the intervention’s effectiveness. Perhaps it might be most easily found in those (unpublished) articles where the intervention did not achieve the effect desired!

Posted in complex interventions, implementation

Leaders vs Managers when implementing improvement?

What if I answer my question in the first sentence, saying both are important – will you read on to find out more?

It is common to summarise the differences between leadership and management.

What is less clear are the ways in which leaders and managers can implement change by working together. Here are some ideas that have been prompted by reviewing recent research papers.

  • It is ideal when leaders and managers set the goal and purpose for a change together; then the managers can write the plans and timeline while leaders focus on facilitating and engaging individuals and groups of staff to identify personal benefits [change needs to be well organised, but individuals need to see the benefit of their participation for it to be sustainable]
  • Often a new way of working has to be established; most people will look to managers to set the new standards, but leaders will often prepare and motivate individuals to try out and adhere to the new ways of working
  • When managers are aware of and can participate in a change (that will affect them), they are more inclined to ensure sufficient resources and staffing; however it will usually require leaders to emphasise what is important and therefore to establish a cultural change to support the new habits
  • While leaders are often the ones to support and supervise staff, it is managers who support training and set up systems to ensure this is maintained
  • For any change to be sustained, there needs to be regular monitoring and feedback to ensure slight modifications over time; often leaders can identify the areas that are most sensitive to change, from listening to and understanding their staff, but it is managers who can implement efficient systems to monitor quality, accuracy and consistency as part of everyday practice; further, it is positive feedback from new habits that will reinforce and continue behavioural change

Hope these are some useful and practical ideas to play with – feedback is welcome…

Posted in change, implementation, improvement, leadership, management

Using clinical guidelines critically to define and measure best practice

Busy clinicians don’t have time to search for and appraise research articles; however, many do use clinical guidelines to inform their practice. At the same time, we know that clinical guidelines do not always synthesise the highest quality research evidence into applicable clinical practice recommendations. Clinical guidelines are being produced by many government, professional and national groups without full transparency about the quality of the included research and the strength of the recommendations. Therefore, it is often difficult for clinicians to tell if specific recommendations are applicable, transferable and important for their own patients.

However, a group of researchers at Monash University in Melbourne have demonstrated a pragmatic plan for using clinical guidelines, within knowledge translation research, to set standards for best practice. They describe 6 steps, which I have also seen applied in a different context;

  1. identify all relevant guidelines for the clinical area of interest; identify those of highest quality using the AGREE instrument; extract all clinical recommendations
  2. identify key clinical management areas and identify the specific recommendations that are rated as strong using the GRADE methodology
  3. update the underlying evidence if required; you may need to do a systematic search
  4. discuss and evaluate the current evidence (including clinical guideline recommendations) with recognised experts to agree on evidence statements
  5. discuss the relevance of this evidence with local stakeholders to determine what recommendations are applicable, transferable and important to the local patient population; in my experience this is a crucial stage which requires facilitated face-to-face meetings with all relevant clinical specialists, and a transparent process of consensus development
  6. redefine local best practice recommendations that are realistic and achievable within existing structures and staff; when all key staff are engaged in the previous stage, they are more committed and engaged in defining practices that they understand, agree with and can adhere to

In my experience, the local best practice recommendations were defined and written into the electronic medical record. This made compliance more straightforward and enabled monitoring of adherence and variance – it was important to recognise that these recommendations were not minimum standards and that there would always be some patients that required less or more intervention to meet their unique set of clinical needs. I would also recommend, as did the authors in this study, that pilot work is required to determine if the measures identified in the best practice recommendations are appropriate for the local setting in terms of their sensitivity to change, feasibility, validity and reliability.

In summary, this could be a pragmatic way for clinicians to update their clinical practice to be truly evidence-based, with some academic facilitation and expertise. I look forward to others’ experiences of using clinical guidelines to inform their clinical practice.

Posted in best practice, clinical guidelines, knowledge translation

Engaging with patients across cultures?

What is important about working with patients? Following a discussion in a culturally diverse learning group today, I have summarised the collective perspective. Drawing on more than 200 cumulative years of working with patients with cancer in low- and middle-income countries, the group suggested that clinicians learned about;

  • cultural sensitivity, developed from living in and engaging with the specific culture
  • appreciating patients’ awareness and acceptance of their disease process
  • respecting the family’s awareness and acceptance of the condition
  • recognising the influence of other cultural, religious and media-driven expectations of specific diseases

I can only reflect that these insights are likely to be much more broadly applicable…

Posted in culture, patients

Do theories make knowledge more understandable?

When learning about a new or complex area, it is often difficult to know where to start to make sense of different observations, to identify which components are important, and how to link similar things together.
Knowledge itself is often complex and multi-faceted, consisting of information we read, hear, observe or experience. Reading about or listening to something new may encourage us to look for more information to confirm or disconfirm what we have read or heard. Many of us like testing out new ideas with respected peers. Perhaps this is why listening to and engaging in debate is so popular. However, while debates can be critical, informed and reflective, at other times they may hide biases, assumptions, assertions and political aspirations!
Theories and theoretical frameworks can be useful to deepen and develop knowledge. Most commonly, they are defined as coherent sets of statements, concepts or ideas that organise, predict and explain phenomena. Often, they guide us to think about complex topics in new ways, as they highlight what their originators consider to be key aspects of a topic or situation. They can help us to focus more quickly to understand and integrate new knowledge. Sometimes there is an ‘a-ha’ moment, where a theory explains things in a way that you have not thought of, but which makes sense!

So can theories help us to learn for the future?
Can we use theories to shape our thinking so that we don’t need to rethink new problems from basic principles?  If we take time to investigate theories around a new situation, will they help us to integrate this new knowledge, so that when faced with a similar dilemma in the future, we will have learned to understand and manage the situation more efficiently and effectively?

Posted in behaviour, change, improvement, information, theories, thinking

Disruptive thinking – how does it help?

Most of us do what we have always done. In fact, craziness has been defined as expecting something different when you do what you have always done! So how do we change or improve our practice?

Often, it starts with a disruption in thinking; some new knowledge, a different way of looking at or analysing what is around us. However, this is often accompanied with feelings of frustration – it doesn’t make sense, feel comfortable, fit with my current experience… I don’t have the time or energy to go there!

Perhaps this is what happens to most good ideas?

Sometimes, however, a new way of thinking catches, tantalises us, is reinforced by discussion with others, is supported by further reading or investigation. We start down the path of what if… We look for more evidence to support or refute our perspective. Most of us are social learners, and we look to our opinion leaders and respected colleagues. Many of us want to debate more before we really accept or understand.

and then, just occasionally, new ideas take hold and change our ways of thinking for ever. We assume a new way of doing things and forget that we ever doubted it. Think about your initial reactions to using a smart phone, a satellite navigation system, or truly mastering a new skill. Many new behaviours are preceded by a period of doubt and uncertainty.

and so the challenge for those of us in education is to present the early steps of disruptive thinking as a normal part of the educational process; provide opportunities to engage in debate, accept the frustration, and reinforce the new thinking. Usually it is hard to remember how our thinking has changed!

Posted in behaviour, change, knowledge, thinking

Interpreting research evidence, from there to here

A key challenge in using research in practice is knowing how to interpret and translate results. Even when we have clear results from a well-designed randomised controlled trial (RCT), there is a challenge in understanding and interpreting them. Often we look for an effect size that summarises the average benefit across all different combinations of the intervention. There is usually a key intervention that has been randomly allocated across 2 similar groups, such that the benefit (indicated by the effect size) is attributable to the intervention. Most often, it is assumed that the intervention will work, to a similar effect size, in other populations that are largely similar. To truly test the intervention, a range of RCTs are conducted in different settings, and later a systematic review and meta-analysis of these effect sizes demonstrates significant benefit. This can then be more broadly translated across different settings.

The challenges of internal validity are simple – the effect size is a summary of the average benefit across all individuals in the population. When there is large variation in either the confidence intervals or the standard deviation around the final outcome measurement, there is also large variability in the population, so there are fewer individuals who actually have average scores!
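As a purely illustrative sketch of this point, a small simulation (with hypothetical, normally distributed outcome scores) shows how the proportion of individuals scoring close to the average shrinks as the standard deviation grows:

```python
# Sketch: the wider the spread of outcome scores, the fewer individuals
# actually sit near the 'average benefit'. All numbers are simulated,
# purely for illustration.
import random
import statistics

random.seed(42)

def share_near_mean(sd, n=10_000, band=5.0, mean=50.0):
    """Fraction of simulated individuals within +/- band of the sample mean."""
    scores = [random.gauss(mean, sd) for _ in range(n)]
    m = statistics.fmean(scores)
    return sum(abs(s - m) <= band for s in scores) / n

narrow = share_near_mean(sd=5.0)   # tight distribution around the mean
wide = share_near_mean(sd=20.0)    # large variability in the population

print(f"within 5 points of the mean: sd=5 -> {narrow:.0%}, sd=20 -> {wide:.0%}")
```

With a standard deviation of 5, roughly two thirds of simulated individuals fall within 5 points of the mean; with a standard deviation of 20, only around a fifth do, so the ‘average’ result describes far fewer people.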

The external validity challenges are more complex to understand. If an intervention works somewhere (for the individuals in the study who reported benefits), can we also say that the intervention works in general? Can we then also be confident that it will work in a specific setting in which we want to create the reported benefit?

Often the intervention can be difficult to isolate, and there may be hidden factors or a series of steps that support or limit its use. A convincing philosophy paper argues that we need information about at least 3 areas;

  • factors that supported the intervention in its original setting
  • confirmation that these factors are present in the new setting
  • expectations that these factors will have the same effect in the new setting as they did in the original.

This process is often used to recommend common drugs. Once the mechanism of action is known for a drug, it can be expected to work in similar patients, but not in all people, or in people with a different underlying condition. For example, daily aspirin protects patients who have stable angina and atrial fibrillation against future vascular events, but it is not useful for those of us who do not have these risk factors for cardiac disease.

However, when complex interventions are used within an RCT, the intervention is more difficult to describe and to potentially transfer to other settings. As an example, 20 Dutch primary schools were randomised to receive an intervention programme that targeted preventing physical injury. While the intervention is comprehensive and well described (behind a paywall), interesting patterns were found in the data. Children who were less active overall reported a larger reduction in injuries following this programme. This effect is likely to have been influenced by factors beyond the intervention, such as parents’ attitudes, home environments, children’s motivation or peer influences. It is difficult to understand how the intervention worked; therefore it is more difficult to transfer it to another setting, because we do not know who the intervention was most useful for, and why.

The challenge in tailoring a programme to a new environment, is not to interfere with the original causal factors. But if we do not know what the original causal factors are, and for whom they worked best, it is difficult to sustain or replicate any intervention.


Posted in change, complex interventions, context, implementation, intervention, sustainability

Why is context important in healthcare improvement?

Context is the new keyword to describe the conditions present in the environment when an intervention is carried out to improve healthcare. It is usually excluded from controlled experimental studies, so that the treatment effect can be attributed to the intervention. However, while it might be easy to understand the pathophysiology of certain drugs that are shown to be more effective than others, a good RCT can only tell you that a treatment is effective, not why, or in what conditions it is more effective.

As an example, in a drug trial we assume that people take the tablets, at the specified dose, when required; and mostly this does happen, because it is easy to do and many people connect taking the drug with their health improvement. However, when something like a healthy diet and regular exercise is substituted for a drug, we recognise that these recommendations are more complex and require additional adjustments in daily life. These daily life adjustments are what we call the context for this intervention. So while there is research evidence to support regular exercise for improving physical fitness and quality of life for adults with chronic kidney disease, and a summary recommendation for 4-6 months of supervised sessions held 3 times a week, using high intensity mixed cardiovascular and resistance training for 30 to 90 minutes, we still don’t know what aspects of individuals’ personal and family lives and their physical environment make this more achievable.

When we look at research about these complex interventions, which include several interacting components, it is important to identify which conditions in the environment surrounding the delivery of that intervention are more inclined to support its success. Evidence is emerging about which aspects of context are critical for some interventions that are designed to improve quality of care. However, there are often several strategies that can achieve this improvement. Some interventions are more sensitive to context than others, and these sensitivities might differ at different stages and levels of implementation. There is also little guidance about which aspects of context will facilitate or hinder the intervention. It seems that there is both an art and a science to identifying the important contextual aspects to support change.

To fully understand context, many business consultants, managers and academics will recommend a detailed organisational analysis; perhaps a stakeholder or root cause analysis. There are a range of tools and strategies to help understand who and what are the most important aspects of the context, in relation to the planned intervention. Then it is important to look for empirical research evidence about the role of context in supporting clinical interventions, to identify what conditions support or hinder an intervention. Most often, we can find some evidence about barriers in relation to knowledge, attitudes or behaviour.

However, it is often difficult to determine the actual intervention from a published research paper, and it is very uncommon to find suggestions for how to implement it in different environments. There is a need for those of us interested in this area to document both the actual success and replicability of the intervention, through a detailed description of the context that supported and hindered its implementation. Then a systematic comparison between what worked (and didn’t work) in different environments may lead to the identification of contextual factors that are necessary or sufficient for that particular intervention, together with some key facilitators and barriers.

Posted in change, complex interventions, context, implementation, improvement, intervention, organisation

What skills do you need to improve?

Many of us have accepted challenges to improve a service in which we have worked. Sometimes we were successful. But often when we were not, we just moved on to a different place and project. Sometimes lessons have been learned, but often we repeat the same mistakes or optimise similar lucky breaks! The burgeoning science of how to do this in healthcare is variously called implementation or improvement science.

A recent research report summarised the active evaluation of 4 improvement projects, which were set up as learning communities across 2 different sites in the UK. The authors concluded that key individuals needed to be trained in 3 sets of skills that they considered essential for successful implementation of improvement projects;

  1. Technical skills – critical appraisal of research, project and process analysis and management
  2. Soft skills – leadership, communication, negotiation
  3. Learning skills – individual and group learning, sharing knowledge, critical reflection

They used an analogy of a pyramid, where each side represented these sets of skills, and proposed that to sustain improvement (the apex of the pyramid), the organisational base (environment, culture, structures and resources) should be broad and solid enough to support the construction of the three ‘walls’ (‘technical’, ‘soft’ and ‘learning’ skills) to the maximum height.

In addition, they emphasised key strategies that discriminated the successful projects, such as;

  • Aligning the improvement project with external and internal environments
  • Recognising and shaping the internal organisational culture for improvement and learning
  • Developing appropriate structures and processes to sustain organisational cultures
  • Developing the knowledge and expertise of leaders to manage the improvement projects
  • Using project and process management principles to manage tasks
  • Promoting opportunities for new and regular communication to build trust
  • Supporting learning and knowledge sharing

This all sounds very promising, but is there an art to developing and using these skills to implement the key strategies to actually achieve the desired improvement??

Posted in change, implementation, improvement, leadership, learning, management, organisation

Is there evidence for leadership in healthcare?

The concept of leadership is considered important for clinical practice, but the research evidence for the benefits of leadership, particularly for patient care, is sparse. There are some explanations for this…

The management literature is heavy with theoretical articles describing different leadership attributes and hypothesising how leadership is enacted in the workplace. In general, most theorists consider leadership as a reflexive attribute where individuals use their own experience, insights and self-knowledge to match their personal strengths to the demands of the work situation. In short, while there is agreement on basic principles of leadership and there is a long but not very discriminative list of leadership skills, the actual implementation of leadership will vary across individuals, and with respect to their organisational environment.

Therefore, traditional methods of experimental research cannot consistently isolate leadership behaviours, and it is difficult to ‘deliver’ leadership to one group of participants while withholding it from others. Leadership is both a formal and an informal activity that is difficult to turn on or off. This is demonstrated in a Cochrane review of randomised controlled trials that were designed to investigate the effectiveness of local opinion leaders in delivering education to encourage health professionals to use more evidence in their practice. The authors, who are all very experienced systematic reviewers, concluded that it was difficult to identify the actual activities of opinion leaders. They also said it was difficult to separate out the effects of being an opinion leader from other types of educational intervention. They found that an opinion leader was as successful as more traditional forms of education, such as distributing educational materials, carrying out audit and feedback, and using educational outreach activities.
Observational studies make comparisons between environments with different types of leadership and certain outcomes. Often, it is difficult to separate out the effects of leadership itself from particular leadership styles and behaviours. A survey of social workers summarised that leaders who were charismatic, considered individual needs, and inspired and intellectually stimulated their staff were perceived as more effective, and their staff were more satisfied and prepared to put in extra effort. However, while these results support a well recognised model of transformational leadership, they are likely to be biased and causation cannot be inferred.

This approach of identifying patterns and impacts of leadership can also be seen when observational studies are synthesised together. When the relationship between nursing leadership and patient outcomes was investigated, there was a trend for transformational nursing leaders to improve patient satisfaction and reduce adverse events and complications. It was suggested that leaders manage the context, staffing and financial resources. They create appropriate staffing levels for the work being done. They retain and support experienced staff by working in teams, maintaining job satisfaction and keeping staffing levels stable. While these suggestions fit with the theory of transformational leadership and make logical sense, it is difficult to prove these mechanisms of action using randomised controlled trials.

So we are left being able to identify key leadership styles and skills, but unable to ‘prove’ why and how this is so. Perhaps we need to recognise patterns and emphasise the art of leadership, whereby successful individuals have read the organisational context accurately, and used their own skills to achieve outcomes of benefit to those who are following…

Posted in clinical outcomes, clinical practice, evidence, healthcare professional, leadership, management