
(Image: sections of Hieronymus Bosch’s painting “The Garden of Earthly Delights”)

Reblogged from the Centre for Evidence-Based Medicine. Read the original post, and comments, here.

 

On 27th January 2016, 45 people gathered at Green Templeton College, Oxford* for a workshop on ‘Virtues and Vices in Evidence-Based Clinical Practice’.

The impetus for the workshop was the idea that Aristotle’s conception of virtues as personal “excellences” and vices as personal “defects” might help explain how, why and to what extent clinical practice is evidence-based. This blog explains why we sought to bring virtue theory alongside evidence-based health care, what we talked about (professional virtues, intellectual virtues and vices, the psychology of guideline adoption and professional vices), and possible next steps for this interdisciplinary field.

Why align virtue (and vice) ethics with evidence-based health care?

Much has been written on the research-practice gap. We have mountains of evidence-based guidelines, but clinicians rarely follow them. Research on this gap in the medical field has been dominated by trials of behaviourist interventions predicated on a crude stimulus-response model (e.g. how feedback on performance may increase guideline adherence). Social scientists have argued that such models have limited value in situations characterised by a complex interplay of human, organisational and policy influences, and have proposed more complex, emergent, and interactive models of the research-practice link.

Both linear and complex models of ‘getting research into practice’ miss important subtleties about how humans think. They also overlook the fundamentally social and moral nature of clinical practice. The doctor/nurse is not a dispassionate information processor but a caring professional, guided by the ethical question “what is the right thing to do for this patient?”.

Accordingly, the workshop sought to focus on philosophical and psychological aspects of the research-practice gap. How humans behave is underpinned by how they think. Thinking styles are in turn influenced by underlying cognitive mechanisms (including biases), intellectual virtues (e.g. conscientiousness, open-mindedness) and intellectual vices (e.g. dogmatism, closed-mindedness, prejudice). Clinical practice is also strongly influenced by professional virtues (e.g. altruism, integrity, respect for confidentiality) and, regrettably, sometimes by what might be termed professional vices (e.g. a tendency to close ranks, unwillingness to own up to mistakes).

Professional virtues

Iona Heath, retired general practitioner and past President of the Royal College of General Practitioners, gave a moving (and reassuringly old-fashioned) account of professional virtue in clinical practice.  She reminded us that “The word practice is well-chosen, because [clinical work] is neither a science nor an art but a practice that draws on both [science and art] but relies on an Aristotelian practical wisdom.”

Iona quoted the Polish-American philosopher and scientist Alfred Korzybski, who highlighted the gap between the map and the territory. A map is not the territory it represents. The experience of looking at a map, however detailed, is nothing like the experience of walking through a landscape. Similarly, the “map” of medical science, however evidence-based, is nothing like the “territory” of human suffering. There is always a gap between the textbook description of disease and what Iona called the “unfathomable mystery” of how illness affects the individual patient.

The discourse of evidence-based clinical practice is focused on the “map” – that is, the rational arguments of biomedical science. It depicts a particular set of virtues, including commitment to the current state of [research-based] knowledge; integrity and thoroughness in applying that knowledge to the individual patient; and a similar commitment to implementing research findings consistently across populations so as to reduce variation in standards of clinical care.

The implicit conflation of professional virtue with the assiduous use of the tools and techniques of clinical epidemiology has produced a persisting unease among clinicians. A clinician’s accumulated wisdom from years of practice frequently brings him or her to challenge the simple and algorithmic (“if-then”) decision-making of the evidence-based guideline. Indeed, as most clinicians know all too well, the inflexible and overzealous application of guidelines without regard to the detail of individual circumstances can be both harmful and inhumane.

Iona read us a haunting excerpt from TS Eliot’s ‘Little Gidding’, which describes:

“– the shame
Of motives late revealed, and the awareness
Of things ill done and done to others’ harm
Which once you took for exercise of virtue.”

Perhaps, she suggested, it is time to remind ourselves of those professional virtues that relate less to the science of medicine and more to the uniqueness of the patient’s experience of illness: continuity of care, imagination, hope, respect, listening, witnessing and simply being present.

When dealing with a patient whose problems are complex and embedded in a unique life narrative, the doctor’s role may be primarily to help co-construct a coherent story – and one that is more about coping than curing.  Clinical care rests crucially on the teller-listener relationship and how (that is, how hopefully, how respectfully, how attentively) the unfolding story is heard over time.

Notwithstanding the importance of the “map”, the key skill in applying evidence-based guidelines is situational judgement: ensuring that, at every turn in the story, the technical does not invade the existential. Situational judgement is particularly crucial in specialties characterised by a high degree of uncertainty, such as general practice. Time and again, the evidence-based guideline proves ambiguous, incomplete, or throws light on a similar but not identical problem to the one that needs solving right now. The virtuous practitioner is a ‘bricoleur’, making the best use of the tools to hand and adapting them to suit the situation.

In sum, Iona’s talk on professional virtues depicted a nuanced balancing act between (on the one hand) the scientific-rational virtues of prudence, thoroughness and commitment to the consistent application of research findings and (on the other hand) the humanistic-narrative virtues of inter-subjectivity, imagination, hope, respect and so on. The ultimate professional virtue is, perhaps, to get the balance right in an uncertain and ever-changing territory, avoiding both the Scylla of uncritical acceptance of evidence-based guidance and the Charybdis of ignoring (or failing to seek) such guidance.

Intellectual virtues and vices

Quassim Cassam, Professor of Philosophy at the University of Warwick, began his talk by asking why humans resist the adoption of innovations (of which the adoption of evidence-based guidelines by health professionals is one pertinent example). The first question to ask in such circumstances, he suggested, is “is the resistance warranted?”.

To explain unwarranted resistance, we might call upon the intellectual vices (explained below) – but these will not explain warranted variation (where the professional believes, correctly, that the guideline is wrong, or not relevant, or inappropriate). Yet the question “is this resistance warranted?” raises the further question “what counts as a warrant?” – not an easy philosophical question to answer. Resistance is warranted if it is reasonable in relation to a certain body of evidence and a certain range of reasons – but who is to judge what that evidence or those reasons are (or what “reasonable” is in a particular circumstance)?

Importantly, the distinction between warranted and unwarranted resistance is value-laden; there is no neutral perspective. The judgement as to whether resistance is warranted is therefore not a scientific judgement but (in a sense) a political one. Listening to Quassim on this topic, I was reminded of Huw Davies’ comment at a previous conference: “Evidence is what powerful people say it is”.

Leaving aside the crucial question of who should judge whether resistance is warranted, Quassim proposed that in some circumstances, clinicians’ propensity to adopt an evidence-based guideline might be explained by intellectual virtues and vices – that is, qualities that pertain to thinking, reasoning and belief-formation.

Following Aristotle, philosopher Philippa Foot describes virtues as ‘beneficial characteristics that a human being has to have, for his own sake, and that of his fellows’. Intellectual virtues are intellectual excellences, aspects of mind that promote effective and responsible intellectual inquiry; they include carefulness, flexibility, open-mindedness, conscientiousness and creativity.

Intellectual vices, by contrast, are traits that inhibit effective and responsible intellectual inquiry (and hence effective practice). They include conformity, carelessness, rigidity, prejudice, closed-mindedness, dogmatism, arrogance and complacency.

Quassim introduced his ‘EPIC’ framework to consider resistance to the adoption of an idea, practice or object.

‘E’ is for Environment: relevant environmental factors including the individual’s social, cultural and professional background.

‘P’ is for Personality – that is, a person’s disposition to behave in a certain way under certain eliciting conditions – along with internal factors such as motives, emotions and cognitions. Some people are, all other things being equal, more resistant to change than others (and have been disparagingly referred to as ‘laggards’).

‘I’ is for Identity: novel practices might be resisted because they are inconsistent with the individual’s social or professional identity – that is, their sense of who they are.

‘C’ is for Cognitive architecture: the way the mind organizes, processes and interprets information – a topic addressed more fully by Nick Fahy, who spoke later in the day.

There is a danger in calling on “vices” to explain unwarranted resistance to guideline adoption. If a vice is an ingrained personality trait, it will not be amenable to change.  Evidence suggests that we are not equally resistant to all new ideas: we resist some more than others. Quassim suggested it is better to think of intellectual vices as remediable thinking traits rather than irremediable personality traits.

Questioners from the floor were keen to emphasise Quassim’s critical caveat: adoption of innovations is only virtuous (and resistance to adoption is only explained in terms of vices) if and to the extent that such resistance is unwarranted. Many of the tensions in the medical and nursing professions around evidence-based practice currently turn on the argument as to whether this is the case. The distinction between warranted and unwarranted resistance merits further investigation.

The psychology of guideline adoption

Nick Fahy, who has a background in policy and is now studying for a DPhil at Oxford (on resistance to guideline adoption), took forward the theme of ‘professional behaviour change’, shifting from philosophy to psychology – a discipline that has empirically explored many of the issues Quassim described. This section of the blog was penned by Nick.

There is an abundance of theories that are potentially relevant to how professionals think and act with regard to their practice. Daniel Kahneman’s well-known book “Thinking, Fast and Slow” outlines two systems of thinking. System one is effortless, quick to recognise patterns, and works in parallel to process lots of information simultaneously, but has – as Kahneman describes – a number of ‘biases’ in how it functions. System two is conscious, considered, and focused on one thing at a time, but requires effort. A more visual metaphor for the two systems is the elephant and the rider, which Ron Borland has used in his work on hard-to-maintain behaviour.

There are many biases and ways of thinking from Kahneman’s work that are potentially relevant, but Nick focused on one area in particular: overconfidence about our professional judgement. After all, part of becoming an expert is moving from the careful, step-by-step reasoning of the student to the instant pattern recognition of the expert – from system two to system one, in Kahneman’s terms. But being confident in that expert recognition doesn’t necessarily mean that we’re right, and here empirical work has helped to clarify when that confidence is likely to be justified. When our judgement has been built up on the basis of good feedback from much practice in predictable environments, it is likely to be reliable – but when the environment is not regular, or the feedback is not clear or quick, it does not provide such a good basis. For example, anaesthetists mostly get pretty clear and quick feedback about their work in a fairly controlled environment. Radiologists screening for breast cancer, by contrast, are likely to get feedback that is much more delayed and unclear – circumstances under which expert intuition is less likely to be accurate.

That doesn’t stop us being confident in our judgement.  As Kahneman describes, our confidence is more related to the coherence of the stories that we construct, and to the degree to which we intervene in them.  When we take skilled action (such as diagnosing, or operating) in relation to a problem, it increases our perception that the outcome is amenable to skill – which it may or may not be.  Not acting may be just as effective, but doesn’t leave us feeling as confident.

Anders Ericsson has led work on how we develop expertise and expert levels of performance – work that produced the well-known “10,000 hours” figure, but that also underlines the importance of the motivation to do those 10,000 hours of work, the supportive environment in which to develop, and the expert instruction and feedback required to improve. Of particular interest for a constantly-changing field such as medicine is the process of ‘unlearning’. The corollary of expertise, of having developed a skill so far that it has become automatic (again, system one), is that in order to change it we have to unpack it again: unlearn it, bring it back into conscious thought, and relearn the new approach. And that, in the short term, does not mean performing whatever skill it is better; on the contrary, in the short term our performance will be worse. How willing are we as professionals to tolerate that difficult period of relative unskillfulness? And how ready are our employers? That social aspect is itself important, as work by Robert Cialdini in particular has shown: social influence is a strong factor in what we do, and stronger than we think it is.

Given the breadth of potentially relevant theories, the actual application of psychology to implementation research has so far been surprisingly narrow. Most research has drawn on single psychological theories, most commonly the Theory of Planned Behaviour proposed by Ajzen. There has been a limited strand of research seeking to apply multiple theories on an individualised basis, but the main tool for using multiple theories has been the compendium developed by Susan Michie and colleagues, the Theoretical Domains Framework.

Nick concluded by exploring some of the limitations of psychological research so far. In many ways, the aim of much psychological research to be empirical and universal has led to theories and data that are quite abstracted from the real world of clinical care. Our ‘biases’ and heuristics, for example, may be excellent adaptations to the rapid decision-making that clinicians must do every day. Hopefully, future psychological research can provide insights that are better integrated into the real, human context in which medical professionals do their work.

Professional vices

I took on this final topic myself, speaking as a medical doctor with an interest in narrative ethics. A colleague who has worked for many years in senior management in the NHS had welcomed the idea of a conference session on professional vices, seeing it as an opportunity to explore why doctors always seem to be difficult when anyone suggests a change that threatens their professional jurisdiction – even when that change might benefit patients. I didn’t entirely agree with this framing, but I could see where my colleague was coming from.

First, there is conceit. Because medicine in particular is a high-status, socially-sanctioned profession; because when we’re ill we want to put ourselves in the hands of a formally qualified doctor; and because the role of doctor is somehow associated with “goodness”, there is a danger that the doctor starts to believe that he or she is intrinsically, morally, good.

Second, there is overconfidence. Because, in general, doctors are more knowledgeable than patients (though this is by no means always the case), there is a tendency for them to assume that in every aspect of decision-making, their knowledge is worth more than that of the patient – and indeed, more than that of the nurse, the occupational therapist, the healthcare assistant and the carer – not to mention the chief executive, the middle manager, the non-executive director or the investigative journalist.

There is also a tendency for a doctor to assume that because he or she is an expert in topic A, they’re also pretty good on topics B, C and D. Kahneman has set out the cognitive biases that arise from overconfidence, which Nick mentioned briefly in his talk on psychology. These include the illusion of understanding – we think we know more than we do; the illusion of validity – we are confident because we have a coherent explanatory story, but don’t allow for what we don’t know; and the illusion of control – because we take action, we think the outcome is more amenable to skill than if we don’t act. Overconfidence also makes us fail to recognize inherently unpredictable environments, assuming wrongly that everything can be analysed with reference to our own narrow knowledge base.

Overconfidence and the cognitive biases it engenders account for an increasingly common and serious problem in modern medicine: over-medicalisation. This category includes over-investigation, over-diagnosis, over-treatment (including over-assiduous prolongation of death by the aggressive imposition of life support), and over-screening. To take the last of these as an example, an “expert” on screening tends to conceptualise every disease as having a latent phase which, if detected and treated early enough, will save the patient from some dire outcome or other. The vice of overconfidence engenders a dangerous triad of cognitive biases: the illusion of understanding (the “expert” assumes that they know far more about the natural history of the disease than is actually known); the illusion of validity (the “expert” assumes that their mental model of causation and prevention is correct); and the illusion of control (the “expert” assumes that active screening will improve outcomes and that not screening the population will lead to harm). As a result, whole populations are put through screening programmes that not only fail to detect anything preventable but also bring the anxieties, hassle and side effects of false positive results and unnecessary treatment.
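A purely illustrative calculation (the numbers here are hypothetical, not drawn from any real screening programme) shows why false positives loom so large. Suppose a disease affects 1% of a screened population of 10,000 people, and the screening test has 90% sensitivity and 91% specificity. The test will correctly flag 90 of the 100 people who have the disease, but it will also falsely flag 9% of the 9,900 healthy people – roughly 890 of them. So of the nearly 1,000 positive results, only about one in ten is a true positive; the other nine-tenths of those recalled face anxiety, further investigation and possible unnecessary treatment for a disease they never had.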

The third vice I spoke about was over-ambition. As Aristotle pointed out, being ambitious isn’t itself a vice, but being over-ambitious to the detriment of others is. Those who have graduated in medicine, nursing or some other profession will know that this brings a special position in society – and also that the profession itself is hierarchical and internally competitive. Some will have faced the choice between being compassionate, humble and a good team player on the one hand and “getting ahead” on the other – sometimes to the extent of being tempted to exploit the illnesses of patients in small but troubling ways for their own advancement.

Others at the workshop – and those connecting via Twitter on the hashtag #VVEBP – suggested more professional vices, including excessive loyalty to others in the profession, failure to question practice, and a tendency to hide behind jargon.

Discussion, conclusion and next steps

There was much discussion on all the above presentations. One theme that seemed to emerge from this discussion was that the distinction between professional and intellectual vices is not so clear-cut as we had originally assumed. Perhaps some virtues and some vices are more ‘professional’ (moral) and some more intellectual than others, but there are few if any that can be said to occur only in professionals.

Another strong theme from the floor discussion was that the implementation (and non-implementation) of evidence-based clinical practice (however defined) is intimately tied up with power and conflicts of interest. An analysis of ‘virtues’ and ‘vices’ makes more sense when couched in relation to the prevailing power relations among and between the professions and institutions involved.

This preliminary meeting affirmed our view that the study of both virtues and vices promises to throw new light onto the study of adoption of innovation and delivery of evidence-based clinical care. Various papers are now in preparation so watch this space for news of publication.

 

*  Green Templeton College is the University of Oxford’s youngest college, founded in 2008 from the merger of Green College (medicine) and Templeton College (business and management), and with strong representation from the social sciences. The college promotes (among other things) interdisciplinary inquiry at the interface between clinical practice, healthcare delivery and healthcare policy. 
