It’s time to explode the assumption that the best and only evidence referred to in Evidence-Based Health Care (EBHC) is scientific research. Yes, it is important to understand the hierarchies of research evidence, in order to detect potential sources of bias, to be sufficiently critical to gauge the actual effect, and to recognise the context and participants of the reported research study. This revision of Coldplay’s Viva la Vida summarises it well!
But if we go back to the source theorists, evidence-based medicine, the precursor of EBHC, is about integrating individual clinical expertise with the best available evidence. So the next challenge is this: how do we critically evaluate clinical expertise with the same rigour and transparency that we use for critically appraising the research evidence?
Let’s use the example of developing clinical guidelines. Ideally, clinical guidelines summarise the best available research evidence for a specific patient population and produce a set of recommendations for practice. In practice, individual clinicians do not have the time or expertise to review the research evidence themselves. Government and professional organisations have stepped up and supported guideline development groups to do this important task. Usually, groups of clinical experts and systematic reviewers make recommendations for practice. However, we know little about this process of decision making, and we are often unable to distinguish recommendations made on the basis of research evidence from those made on the basis of expert opinion. In these situations, it is difficult to understand, and be critical of, how experts inform recommendations. Often we do not know the clinical experts, their clinical experience, their organisational context, or their potential conflicts of interest. At the simplest level, we cannot evaluate whether recommendations drawn from 25 years of work in a specialist team can apply to a generalist clinician working alone in the community.
Finally, patient values are often seen as the third component of EBHC, such that clinicians are exhorted to integrate their own clinical expertise with patient values and with the best research evidence. Again, we need to pose the same question: how can clinicians critically evaluate patient values? Ideally, patients need to be fully informed about their clinical condition and about the benefits and risks of any proposed intervention – and some would argue that the research evidence needs to be presented in a format that patients can read and interpret for themselves. Often, however, patients’ logical reasoning is compromised when they are ill or in pain, and there are time and resource limitations to being fully informed. There are also many different cultural interpretations and expectations of the extent to which healthcare professionals should, and do, give advice. So the extent to which patients are fully aware of their condition and of their underlying values about health care is highly variable, and this will affect their ability to communicate these complex concepts. Finally, the process by which clinicians understand and integrate this level of ‘evidence’ with their own clinical experience and the research evidence is potentially very complex.
In conclusion, it seems that while we have clear systems for critically appraising research evidence, we are less critical of clinical ‘evidence’ (experience) and probably less informed about patients’ ‘evidence’ (values). It is therefore difficult to truly integrate all three types of evidence for informed healthcare decision making.