How does knowledge translation relate to evidence-based practice?

The most common definition of Evidence-Based Practice (EBP) describes the explicit use of current best evidence in making decisions about individual patient care. It means integrating research evidence with clinicians’ expertise and patient values.
Over the last 20 years, the emphasis of the EBP community has been on quantifying the quality of research evidence, often at the expense of integrating the knowledge and experience of clinicians and of understanding patient values.

At the same time, there has been an explosion in the production of research, such that it is now estimated that up to 85% of research is wasted because of flawed research designs, publication bias and poor-quality reporting. Further, we know that, on average, it takes 17 years for research evidence to reach clinical practice.

Therefore, the aspiration for clinicians to continually update their practice to be evidence-based is unrealistic if we rely on the passive dissemination of research evidence. Further, continuing professional education about using and understanding research evidence is not sufficient to empower clinicians to change their clinical practices. Clinicians have also realised that, as individuals, they are not able to comprehensively review the research evidence and lead the recommended changes in clinical practice. To fully utilise research evidence in clinical practice, there is a need to better understand the complex systems in which clinicians work. Different theories and practices have been developed to bridge what has commonly been termed the knowledge–practice gap: the gap between what the research evidence suggests should happen and what is actually carried out within our healthcare services.

In 2005, the Canadian Institutes of Health Research described knowledge translation (KT) as the exchange and synthesis of knowledge within complex systems of healthcare interactions, to accelerate the benefits of research for individuals and health care systems. The principles and strategies of knowledge translation can help clinicians apply key EBP principles, treating the synthesis of high quality research evidence as one key form of knowledge, alongside local knowledge about current practice, clinician expertise and patient needs. Commonly, clinicians access information across a broad range of sources to influence their practice.

KT strategies encourage clinicians to focus on key clinical outcomes and to identify specific aspects of behaviour change. Any change within a complex system requires the identification of key stakeholders and an analysis of local barriers and facilitators. At the same time, it is important to look for evidence of strategies to support effective individual and organisational change. To complement any implementation, it is vital to incorporate strategies to monitor and sustain positive change.

Posted in barriers, behaviour, clinical improvement, clinician expertise, evidence-based practice, facilitators, health care services, knowledge translation, Uncategorized

How can research improve patient care?

Life-changing research can only benefit patients if they receive an appropriate intervention (as part of normal clinical care) implemented in the way it was designed.

We need to reduce the time lag, currently estimated at 17 years, between research discovery and health intervention and we need to ensure that research is implemented as planned to achieve maximal benefits.

To achieve this, we need to be able to recognise and enhance the trajectory from research discovery to patient impact across multiple research and clinical teams.

It seems important, then, that these research and clinical teams have opportunities to collaborate across different time frames, to ensure that the research intervention can be clinically implemented and sustained within normal practice. Research that is designed in isolation from clinical practice may never be feasible to use. While it is difficult to predict outcomes from early discovery research, could researchers be facilitated, perhaps in discussion with clinicians, to identify potential trajectories towards clinical impact?

Further, can clinical teams be primed to receive and implement positive research results? What if clinical teams knew enough to be able to use research results to improve patient care? A recent systematic review found evidence that a positive research culture in clinical environments generates benefits for patients, staff and the organisation. Lower patient mortality and higher levels of patient satisfaction were reported. Clinical staff were more satisfied and less likely to leave their organisation. There were also examples of increased numbers of patients being treated, with reduced lengths of inpatient stay! At this stage, a positive research culture is only associated with, not predictive of, these benefits. But there is a need for more research to investigate further!

So what if we encouraged collaboration between academics and clinicians in the design of clinically meaningful and relevant research, and supported the clinical workforce to understand and use research? Could a renewed collaboration between the relevant researchers and clinicians ensure the faithful implementation and sustainability of the research benefits for patients?  A dream for nirvana, perhaps…

Posted in collaboration, culture, health care services, implementation, improvement, organisation, patients, research culture, sustainability, using research

Building research capacity: to translate knowledge from research to clinical environments?

I first wrote about capacity building activities in primary health care back in 2003, as a strategy to facilitate clinicians to use, participate in and lead research. We highlighted the cultural divide between the almost independent industries of research production and health care service delivery. We emphasised that the people delivering health care services are not often involved in asking or answering questions that are relevant to their practice. Researchers and clinicians work to different time frames, expectations and rewards.

More recently, the diverse professions of allied health have investigated using research capacity building initiatives to boost the research skills and contributions of busy clinicians. There has been a focus on providing opportunities for clinicians to learn about research methodologies and skills, building networks between hospitals and universities, and developing collaborations around research projects. Sadly, the most commonly reported outcomes are academic outputs such as publications, successful grant applications and postgraduate qualifications. However, there is a growing recognition that research capacity building experiences are beneficial for clinicians’ job satisfaction, and there is a desire to track how these initiatives are facilitating the use of research in clinical care.

So if we revisit the overall goal of using research in practice, we need to be reminded of the necessary but insufficient role of research in guiding clinical practice. While the principles of evidence-based medicine are useful in helping clinicians learn how to search for and identify useful research, and appraise it for application, clinicians need additional resources to adapt the research evidence for their local context and implement it with a view to sustainable improvement for patients.

The structures and processes of health care organisations are crucial. It has been suggested that organisations that support and encourage innovation, data collection and analysis, and critical appraisal and thinking skills will be more likely to use and apply research evidence. It seems that health care managers and administrators need to be purposefully influenced about the benefits of, strategies for, and skills required to use research to improve clinical practice. A whole range of social and knowledge interactions is required to influence the processes and structures that will facilitate the use of research evidence in clinical practice.

Therefore, on a different set of arguments, I can confirm my recommendation from 2003: research capacity building initiatives need to be expanded to include organisational development work that encourages structural and process changes in healthcare organisations, supporting the need for (and the benefit to patients of) using research to improve clinical practice.

Posted in allied health clinicians, clinical improvement, collaboration, networks, organisational development, research capacity, research methods

Quality decision making – a dangerous idea to fix EBM?

I am writing this blog to complement a podcast I made at the recent EvidenceLive conference in Oxford. We were asked to propose a dangerous idea in relation to the future of EBM, and then to suggest a solution. My idea is about recognising, and making explicit, the quality of all components of clinical decision making in evidence-based health care.

The current ‘real vs rubbish EBM‘ debate encourages clinicians to consider the research evidence in conjunction with patient wishes and the clinician’s expertise, just as Dave Sackett exhorted over 20 years ago at the inception of evidence-based practice. More recently, there has been a recognition that all clinical decisions are made within a local context of people, organisations and resources.

So my explanation of this dilemma is about recognising and promoting the quality of 4 key components of clinical decision making (research evidence, patient wishes, clinician expertise and local context). But what is quality? It is most commonly described as a state of excellence, and there are numerous initiatives within healthcare to improve quality through the identification and measurement of key components.

Having identified 4 key components of evidence-based health care, the real challenge is how can we identify, measure and evaluate the quality of the knowledge from each component that contributes to real time clinical decision making?

  1. for the component of research evidence, we have made major advances over the last 20 years in terms of understanding the quality of study designs suitable for answering important clinical questions; we have developed critical appraisal and quality reporting tools and checklists to evaluate the quality of the research evidence at the core of a clinical decision; using the key steps of evidence-based practice, we have taught many clinicians explicit and effective ways to identify, search for and critically appraise research evidence to answer specific clinical queries
  2. with respect to patient wishes, there are tools developed to help patients understand risk and bias, so they can recognise that every intervention has potential benefits and harms; and the concept of shared decision making highlights the importance of informing patients about the comparative benefits and harms when the research evidence is uncertain; however it is difficult for busy clinicians to consistently identify and incorporate patient values in clinical decisions
  3. in the component of the clinician’s expertise, clinical experience has long been discounted as the lowest form of research evidence, yet it is clinicians who develop their practice through a combination of their own tacit experience and clinical reasoning skills; therefore, if we recognise these locally relevant inductive explanations or mindlines, can we make them more transparent so that they can be tested and developed for other clinicians?
  4. with respect to the local context, we know from the relatively new field of implementation science that certain patterns of resources and people can function as barriers or facilitators to using research evidence and other forms of knowledge to make clinical decisions; we also know that organisations which encourage innovation, data collection and analysis, and critical appraisal are more likely to use and apply research evidence; so it is important to identify and validate tools that can examine the capacity of individuals and health organisations to evaluate their local context as a basis for using research.

Therefore, it is time to move outside our current paradigms to understand the contribution of knowledge of clinical research, patient values, clinicians’ expertise and the local context within clinical decision making. It is only then, when we understand how knowledge is truly co-constructed, that we can promote high quality decision making with some consistency.

Posted in clinician expertise, evidence-based practice, local context, mindlines, patient preference, patient values, research evidence, shared decision making

Learning experiences from translating knowledge

Here’s a quick summary from two wonderful presenters during last week’s module, “Knowledge into Action”.

I suggest that these shared learnings might be a useful guide to check individual progress during projects that aim to use research evidence to improve clinical practice. They are not in any particular order, as we all recognise that this complex process is both circular and sometimes spiralling.

  1. start small then scale up
  2. being systematic is key to leading knowledge improvement projects – this will avoid people jumping to quick fixes
  3. use a sound theoretical model to guide decision making – this avoids random guesses about what might or might not work and may even help to guide planning about what is achievable
  4. review barriers and facilitators for change as a basis for determining what is possible or feasible
  5. use the process of a clinical audit or root-cause analysis to understand what is going on – ask questions about how and why, to understand what is behind the patterns or trends
  6. develop an improvement/intervention plan and stick to it
  7. take time to emphasise expectations and benefits for all involved in the change – bottom up change is slow and requires buy-in from the whole team
  8. sometimes the research evidence is not enough to know what to do – there is a need to document systematically what works in practice
  9. accumulate learning experiences to inform future successes
  10. focus on evidence gaps that are important to the particular stakeholders
  11. one size does not fit all

Thanks so much, Brian Oldenburg and Amelia Samuels.

Posted in change, clinical practice, complex interventions, implementation, knowledge translation

Logic Models: what, why, how?

A recent article emphasised the power of using theory to explain why improvement programmes work. When key components of an intervention can be identified and their theoretical contribution to final outcomes understood, causal associations can be identified and tested. Although it has been argued that complex interventions require complex solutions, psychological theories have an important function in explaining the human behaviours that are often the focus of change. So how can we isolate and describe the theories that may be used, formally or informally, within complex interventions?

Logic models are a useful tool. They are often described as process maps, theories of change, or logical frameworks. Similarly they are depicted in many different ways and formats.

At their core, they are simplified and graphical descriptions of what happens in an intervention, described in a particular sequence, with clear and practical benefits. It is important to represent the key actions or components that need to take place to drive the change that will generate the expected outcomes.

They usually describe the mechanisms of change: the reasons that the activities are thought to lead to the intended outcomes. In designing a logic model, it is also useful to ask IF (activities are implemented) THEN (what) behaviours or outcomes are expected? Psychological theory can be used to explain how a particular activity encourages specific behaviours (and why this is likely to happen).
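
As a rough illustration only, here is a minimal sketch in Python of how a logic model’s IF–THEN links might be written down explicitly. The activities, mechanisms and outcomes below are hypothetical examples for a healthy-lifestyle intervention, not drawn from any particular programme.

```python
from dataclasses import dataclass

@dataclass
class LogicLink:
    """One IF-THEN link in a logic model (all content below is hypothetical)."""
    activity: str   # IF this activity is implemented...
    mechanism: str  # ...because of this theorised mechanism...
    outcome: str    # ...THEN this behaviour or outcome is expected.

# A hypothetical logic model for a healthy-lifestyle intervention.
logic_model = [
    LogicLink(
        activity="weekly group sessions offering dietary advice",
        mechanism="social support increases motivation to change eating habits",
        outcome="participants report healthier eating at three months",
    ),
    LogicLink(
        activity="individual goal setting with a physiotherapist",
        mechanism="self-efficacy grows as small, achievable goals are met",
        outcome="participants exercise at least twice a week",
    ),
]

for link in logic_model:
    print(f"IF {link.activity}")
    print(f"  because {link.mechanism}")
    print(f"  THEN {link.outcome}\n")
```

Even in this toy form, making the mechanism column explicit forces the team to name the theory behind each activity, which is exactly the hidden-assumption conversation discussed below.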

This is not just good practice: there is evidence to suggest that multifaceted interventions often work better when they incorporate intervention components that are known to be effective.

So when should we bother to develop a logic model of what we are doing in a complex intervention? Often they are seen as a useful tool to design evaluation schedules.

However, the process of developing a logic model can be very beneficial for intervention teams, as a way to discuss and clarify hidden assumptions about how the intervention is expected to work. Most people will have mental models or ‘common sense’ explanations of how the active ingredients of an intervention work together to achieve the expected outcomes. However, these internal assumptions and explanations may not be aligned, and therefore different practices could be used without being theoretically compatible. A logic model can then guide the key outcomes to be assessed, and the measurement tools, analytical approaches and standards to be used.

Further, when a logic model is developed early in a complex intervention, it can highlight the need for certain types of contextual analysis, through the identification of core components that are necessary for success. It may help in prioritising which barriers are most important to address. Logic models can also inform efficient implementation plans, highlighting which key processes need to be monitored and evaluated. This may ensure that evaluation is designed into the intervention from the outset. Finally, it makes the evaluation of these programmes much more useful in the future, when you can affirm that a particular component influenced behaviour to achieve its expected success. It can be even more exciting when qualitative interviews confirm the theoretical explanation of why this happened!

Posted in behaviour, change, complex interventions, context, organisation, theories

Stakeholders: key to understanding context?

How should you decide which aspects of context to analyse in order to apply research to improve clinical practice?

I suggest that stakeholders who have high interest, influence and power are best able to identify the key aspects of context that are likely to be influenced by any proposed improvement or intervention.

Why do I say this?

We know that the local context is important to the way research can be used to improve clinical practice. There is an understanding, inspired by the Knowledge to Action framework, that context needs to be analysed and understood in order to adapt and apply the research evidence. If the local context is sufficiently different from the setting in which the original research was conducted, there is a need to tailor the intervention to the different local context.

So how best should you analyse context, as in the local and external environments, to make these comparisons? Several tools and frameworks have been developed to direct this analysis. The Health Foundation (UK) have produced a summary document Perspectives on Context. The PARiHS (Promoting Action on Research Implementation in Health Services) framework proposes diagnostic evaluation of both context and the research evidence as a prerequisite for designing interventions for change. While there are some commonalities between these two sets of recommendations, there are also insights to be gained from business and project management, including analysis of stakeholders, force fields, root causes and many more.

There is a risk of needing to analyse every aspect of both the external and internal environments. How can you be sufficiently broad and critical to identify which aspects of the local context are important for a specific issue, where it has been noted that current practice is not applying the best quality research evidence? With careful engagement, those stakeholders with high levels of power, influence and interest in the area often have the best understanding of their local context. However, the next challenge is to facilitate a shared critical review of their own context, so that they can highlight the issues and areas for detailed analysis.
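
As a rough illustration (not a validated stakeholder-analysis instrument), a simple power/interest classification can be sketched in code; the stakeholder roles, scores and cut-off below are entirely hypothetical assumptions.

```python
# A minimal power/interest grid sketch. The stakeholders, scores (0-10) and
# cut-off are hypothetical; this is not a validated stakeholder-analysis tool.
stakeholders = {
    "ward manager":           {"power": 8, "interest": 9},
    "junior physiotherapist": {"power": 3, "interest": 8},
    "finance director":       {"power": 9, "interest": 2},
    "visiting researcher":    {"power": 2, "interest": 3},
}

def quadrant(power: int, interest: int, cut: int = 5) -> str:
    """Classify a stakeholder into one of the four classic grid quadrants."""
    if power >= cut and interest >= cut:
        return "manage closely - involve in the context analysis"
    if power >= cut:
        return "keep satisfied"
    if interest >= cut:
        return "keep informed"
    return "monitor"

for name, scores in stakeholders.items():
    print(f"{name}: {quadrant(scores['power'], scores['interest'])}")
```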

Posted in clinical practice, context, culture, evidence, health care services, improvement, leadership

Why test theory fidelity?

Recent guidance about developing and evaluating complex interventions emphasises the need to use theoretical explanations to aid understanding of what works within an intervention. This is in direct contrast to the large research literature about drug studies, where human physiology is used to explain why certain drugs do and do not work in specific people and clinical conditions. This works well, in most cases, because there is consistency of physiology across the human race.

However, in complex interventions (such as adopting a healthy lifestyle), the actual intervention is often composed of several components (dietary and exercise advice, group work, individual goal setting), and is delivered by certain people (GPs, physiotherapists, trainers, coaches) for patients in specific circumstances. The complexity is often multi-layered and acts across several systems; from the healthcare practitioners that deliver it, to patient communities, and to individuals and their families. There is often an element of behaviour change for the staff delivering the intervention, and for the patients receiving it.

This is often a point of confusion, because there are many psychological and sociological theories of behaviour change which can be used to justify or explain a complex intervention. Behavioural theories can act at an individual level (I will continue to exercise because I feel positive benefits), or a group level (I feel more motivated when I discuss healthy eating strategies with my friends). They can also refer to the social environment (It is rewarding seeing other people being healthy around me) or to the physical environment (The gym is easy for me to get to, on my own).

In the current literature, it seems that theories can be chosen to explain why an intervention does or does not work. Sometimes they are offered as an excuse or an isolated explanation.

  • Is this fair – especially if the same theory has not been used to design or implement the intervention?
  • Does this confuse or distort the responsible use of theory in research?
  • Should theory be used faithfully and responsibly in designing and implementing interventions before it can inform evaluation?

If we recognise the power of theories (self-efficacy, adult learning) to explain why certain behaviours (regular exercise, healthy eating) create specific outcomes (fitness, weight loss), then should we expect, in high quality research, that the theory has been faithfully used at all stages of designing, implementing and evaluating an intervention? In the same way that a poor understanding of basic biochemistry may distort results from drug studies, does a poorly used theory also confuse and distort explanations of complex interventions?

The concept of fidelity was recently introduced for implementing complex interventions. It was suggested that the way the intervention was delivered could influence the intended outcomes. Specifically, five elements were identified that could be measured: adherence, exposure, quality, participant responsiveness and programme differentiation. Through measurement, it can be determined how faithfully the intervention was delivered, and there can then be more confidence in attributing outcomes to faithfully implemented complex interventions.
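
As a toy illustration of how those five elements might be recorded and summarised, here is a short sketch; the 0–1 scores, the 0.7 threshold and the unweighted averaging are my own illustrative assumptions, not part of the original fidelity framework.

```python
# Toy summary of the five fidelity elements named above. The 0-1 scores,
# the 0.7 threshold and the unweighted averaging are illustrative assumptions.
fidelity_scores = {
    "adherence": 0.90,                   # proportion of prescribed components delivered
    "exposure": 0.70,                    # dose received versus dose intended
    "quality": 0.80,                     # rated quality of delivery
    "participant responsiveness": 0.60,  # engagement of participants
    "programme differentiation": 0.85,   # distinctness from usual care
}

overall = sum(fidelity_scores.values()) / len(fidelity_scores)
print(f"Overall fidelity index (unweighted mean): {overall:.2f}")

for element, score in fidelity_scores.items():
    flag = "acceptable" if score >= 0.70 else "needs review"
    print(f"{element}: {score:.2f} ({flag})")
```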

Using similar arguments, we may be able to be more confident in using theories to explain and predict outcomes if they are consistently and transparently used to explain why and how complex interventions should be designed, delivered and evaluated.

The next challenge is: how do we identify and measure the faithful use of theory in designing, delivering and evaluating complex interventions?

Posted in behaviour, change, complex interventions, implementation, intervention, realist review, theory fidelity

New methodologies for EBM?

Trisha Greenhalgh opened the 1st Realist Evaluation and Synthesis conference in Liverpool last week with the question Should we align realist methodology and evidence-based medicine? She surveyed the audience to reveal a lot of uncertainty.

She referred to the RealEBM debate and her recent BMJ article, which posed the question Evidence-Based Medicine: A Movement in Crisis? to substantiate the crisis of confidence. She discussed the critical perceptions of EBM: that randomised controlled trials (RCTs) are the gold standard for evidence, yet they don’t help clinicians make individual decisions and they can be used for political and financial purposes.

I was not able to respond in person at the time, but I have summarised my response…

Many experienced clinicians approach us at the Centre for Evidence-Based Medicine in Oxford to learn about ways to improve their understanding and use of research evidence in clinical practice. As they learn about the 5 steps of evidence-based practice, they quickly realise that, as they ask important clinical questions and search the many databases, they need to engage with research more broadly than just interpreting RCTs. They also realise that many clinical questions are in fact complex questions that cannot be answered by RCTs alone. A good RCT can only tell you about the effect of an intervention – it cannot tell you why the intervention was successful or whether it can be implemented in another environment. To answer these questions we need alternative methodologies. Qualitative, mixed and realist methodologies could potentially help to provide answers to the more challenging and important clinical questions.

Therefore it is vital for the EBM community to continue to help clinicians answer the real questions posed by patients and the complexity of clinical practice. As clinicians have learned about EBM, they have answered some of the more straightforward clinical questions, leaving the complex ones unanswered! We still need to recognise the importance of asking a clear question, and use this to guide the best study design to search for. It is becoming obvious to many clinicians that there is a need to engage with methodologies other than the gold standard RCT. Therefore we need to build up a repertoire of different study designs to keep up with the clinical questions that are being asked. To do this, we need to understand the practical steps of different methodologies, and we need to critically evaluate their comparative quality and rigour.

It is in determining research rigour that there are significant challenges across methodologies. The positivist epistemology of quantitative research requires critical appraisal of the risk of bias, through detailed checklists of internal and external validity. In contrast, constructivist qualitative researchers emphasise concepts such as credibility and trustworthiness. While there has been a long debate about the fundamental differences between qualitative and quantitative research, there is an increasing pragmatic awareness of the need to mix these methods to answer contemporary research questions. Later in her talk, Trisha indicated that realism may be a philosophical midpoint on a continuum between positivist and constructivist paradigms. Realists focus on making sense of social realities by developing explanatory theories that explain the impact of certain situations on the outcomes of research interventions. In fact, realist reviewers aim to answer the important question of what works, for whom, in what circumstances. Therefore, it seems that the realist paradigm and its methodologies will be very important in helping clinicians to answer complex questions now, and into the future. The EBM community needs to engage!

Posted in Evidence-Based Medicine, methodology, RealEBM, realist review

Different circumstances, different outcomes

In my quest to better understand which aspects of the local environment influence the way complex interventions are delivered, I was alerted to a comprehensive realist review of human resource management interventions designed to improve health workers’ performance in low and middle income countries. In the 48 included studies, the most commonly evaluated intervention was continuing education, either alone or with other strategies such as supervision, incentives, quality improvement and decentralisation.

The exciting difference about this review is that it aimed both to identify important contextual factors AND to explain why specific interventions work in certain contexts but not others.

Firstly, combined interventions of participatory, interactive training, job aids and strengthening health systems can improve health workers’ performance – however, we don’t know by how much.

Secondly, important contextual factors include:

  • involving local authorities, communities and management to maintain adequate staffing and equipment
  • involving local stakeholders in adapting the intervention to the local situation
  • involving local staff to identify problems and implement local solutions

Thirdly, health workers can improve their performance through:

  • enhancing their knowledge and skills
  • improving their working conditions, motivation, job satisfaction
  • being empowered and feeling obliged to implement change
  • financial incentives

While this is only a snapshot of certain research conducted in low and middle income countries, the results are encouraging and offer directions for future research and practice.

Posted in behaviour, complex interventions, culture, implementation, improvement