Catching it early: netting the costs of earlier cancer diagnosis
When it comes to cancer, there’s one phrase that crops up again and again in TV and film: “The good news is, we’ve caught it early”. The idea that early diagnosis can make all the difference is a compelling one, and as a result it has been embraced as something of a magic bullet by health policy makers and politicians.
The hope is that by catching the disease in its earlier stages, treatments will be more effective, giving patients more time in good health or even a cure. As a result, last year NICE launched new guidance specifically designed to improve recognition and referral of patients with cancer. Since over 90% of these patients will first contact their GP, a lot of the focus is on ways to improve the detection of cancer in primary care.
However, despite the rhetoric, achieving changes here is by no means straightforward. Cancer is still a thankfully rare condition in the context of a GP’s daily workload, and symptoms are often non-specific, making identification challenging in many cases. GPs have limited time and the NHS has a limited budget; overzealous referral practices could result in cancer units being overwhelmed with extra patients who don’t need to be there, reducing the quality of care for those who do. It could also slow down the diagnosis of these extra patients, who instead have to go through the stress of an unnecessary cancer referral process before they can access the treatment they need.
As a result it is crucial to understand the consequences of any changes in practice, whether they come in the form of new diagnostic tests, clinical decision rules, or incentives such as the two week wait system.
By predicting the costs and outcomes of a variety of possible strategies, health economists can make recommendations about which options are most cost effective, and will provide the best outcomes for patients, given the limited budget. It is worth adding that a cost-ineffective treatment or test isn’t necessarily expensive, it could also mean that there are just better alternatives for the same cost. Similarly, expensive treatments can be cost-effective if patients get a lot of benefit from them.
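To make that comparison concrete, health economists typically express it as a cost per unit of health gained and compare it with a benchmark; NICE's usual threshold is roughly £20,000–£30,000 per quality-adjusted life year (QALY). A minimal sketch of that calculation, using invented figures rather than anything from this article, might look like this:

```python
# A minimal sketch of a cost-effectiveness comparison.
# The cost and benefit figures are invented purely for illustration.

def icer(extra_cost, extra_qalys):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return extra_cost / extra_qalys

THRESHOLD = 20_000  # NICE typically works with roughly £20,000-£30,000 per QALY

# A cheap option that adds £10 per patient but almost no benefit...
cheap = icer(extra_cost=10, extra_qalys=0.0001)        # £100,000 per QALY
# ...versus an expensive one that adds £20,000 but two years of good health.
expensive = icer(extra_cost=20_000, extra_qalys=2.0)   # £10,000 per QALY

print(f"cheap option:     £{cheap:,.0f} per QALY -> cost-effective: {cheap <= THRESHOLD}")
print(f"expensive option: £{expensive:,.0f} per QALY -> cost-effective: {expensive <= THRESHOLD}")
```

On these (made-up) numbers the cheap option is the cost-ineffective one, because it buys almost no health for the money, while the expensive option comfortably clears the threshold.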
Part one of measuring cost-effectiveness: what’s the effect?
In predicting these effects, researchers face a number of obstacles - the largest of which is estimating the health benefits that a new diagnostic technology will actually bring. Getting a diagnosis does not, in itself, offer any health benefit at all, beyond the reassurance of knowing what is wrong with you. It is typically only worth changing the diagnostic pathway if doing so will result in some change to the way patients are treated, which in turn gives them better outcomes (living longer or with a better quality of life). In the case of cancer diagnosis, the hope is that catching patients earlier will do this. However, while there is evidence for this in some cases, and it makes intuitive sense given the way the disease progresses, it isn’t always certain.
One reason for this is measurement. In cancer we often use five-year survival to assess changes in outcomes. But if a patient is diagnosed six months earlier, then even if that earlier diagnosis doesn’t change their overall survival at all, that statistic will still suggest that they survived for six months longer. This is one of the reasons cancer survival times seem to vary so much between countries - each country measures on a different scale, taking a different time point as the moment of diagnosis.
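A small sketch makes the point, using made-up dates: shifting the diagnosis six months earlier turns the same patient into a "five-year survivor" without changing the date of death at all.

```python
# Illustration of the measurement problem described above, with invented dates.
from datetime import date

death = date(2020, 3, 1)
late_diagnosis = date(2015, 6, 1)    # diagnosed on clinical presentation
early_diagnosis = date(2014, 12, 1)  # same patient, picked up six months earlier

def survival_years(diagnosis, death):
    return (death - diagnosis).days / 365.25

print(f"{survival_years(late_diagnosis, death):.2f} years")   # ~4.75 -> not a 5-year survivor
print(f"{survival_years(early_diagnosis, death):.2f} years")  # ~5.25 -> counted as a 5-year survivor
# The date of death is identical in both cases: the patient gains no extra life,
# but the five-year survival statistic improves because the clock starts earlier.
```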
Alternatively, the evidence can be muddied by a “reverse-causality” effect, where it can look like early diagnosis is actually associated with worse outcomes. This happens because patients with more severe symptoms and a worse prognosis tend to be the most obvious cases, and therefore are diagnosed much sooner. In this case, disease severity is causing the difference in diagnostic delay, and not vice versa.
Another challenge is in measuring how well tests actually work. Unlike cancer treatments, which may have undergone hundreds of clinical trials before getting to a patient, diagnostic tests will have had comparatively few assessments. Studies will have been conducted to measure how well a test can distinguish between healthy and sick patients (case-control studies), but there are usually far fewer studies looking at a whole cross-section of potential cases (cohort studies).
In practice, though, patients will usually already have something wrong with them, and the test may not be as good at identifying who in a group of sick patients has cancer in particular, rather than some other condition.
As a result, it can be hard to measure how well a test will work from just the trial data, and this department is conducting ongoing studies investigating how to better measure and account for this.
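As a rough sketch of why this matters (the counts below are invented purely for illustration): a test can look highly specific when cancer patients are compared with healthy controls, yet lose much of that specificity in a cohort of symptomatic patients whose other conditions can also trigger a positive result.

```python
# Why case-control performance may not carry over to primary care.
# All counts are invented for illustration.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a simple 2x2 table of test results."""
    return tp / (tp + fn), tn / (tn + fp)

# Case-control study: cancer patients versus healthy controls.
sens_cc, spec_cc = sens_spec(tp=90, fn=10, tn=95, fp=5)

# Cohort of symptomatic patients: the "non-cancer" group now includes people
# with other illnesses that can also produce a positive result.
sens_co, spec_co = sens_spec(tp=85, fn=15, tn=70, fp=30)

print(f"case-control: sensitivity {sens_cc:.0%}, specificity {spec_cc:.0%}")  # 90%, 95%
print(f"cohort:       sensitivity {sens_co:.0%}, specificity {spec_co:.0%}")  # 85%, 70%
```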
It’s also important to identify correctly the consequences of any inaccurate results. While the result of missing cases is obvious, there is also a potential harm associated with a false positive result. This can include the psychological harm of testing, but also physical harms associated with more invasive testing, such as biopsies, and even surgery in some cases, before the diagnosis can be ruled out.
Part two: Is the cost worth it?
On top of this, researchers need to consider any additional costs. And the extra cost isn’t just the cost for each patient who has the condition, but for every test taken.
If a test costs £10, but the disease incidence is only one percent (a fairly common situation for cancer in primary care), then it will cost the NHS £1000 to identify each case using that test - the cost of all the negative tests as well as the positive.
Now suppose there is an existing test that costs only £5 but identifies just 80% of cases. Switching everyone to the better £10 test will find one extra case in every 500 patients tested, and spending an extra £5 on each of those 500 patients means that each additional case found costs £2,500.
That’s a lot of money to spend just to get a diagnosis, especially if there is uncertainty over whether it will provide any long-term benefits.
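The same arithmetic written out as a short sketch (the prices and detection rates are those from the example above; the assumption that the dearer test finds every case is only there to keep the numbers round):

```python
# Reproducing the worked example above in code, using the same figures as the text.

incidence = 0.01          # 1 in 100 patients tested actually has cancer

# New test: £10 per test, assumed here to pick up every case.
new_test_cost = 10
cost_per_case = new_test_cost / incidence
print(f"£{cost_per_case:,.0f} to identify each case")                 # £1,000

# Existing test: £5 per test, finds only 80% of cases.
old_test_cost = 5
old_sensitivity = 0.8

# Per 500 patients there are 5 cases; the old test finds 4, the new one 5,
# so the new test finds one extra case per 500 patients tested.
patients_per_extra_case = 1 / (incidence * (1 - old_sensitivity))     # 500
extra_cost_per_patient = new_test_cost - old_test_cost                # £5
extra_cost_per_case = patients_per_extra_case * extra_cost_per_patient
print(f"£{extra_cost_per_case:,.0f} per additional case found")       # £2,500
```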
Besides this, it is highly likely the test will pick up as positive some patients who don’t have cancer. If the outcome of a positive test result is a referral and further testing, all of those costs are directly attributable to the test (that patient would not have been unnecessarily referred without the test), so they are added on.
If the next stage is an MRI or biopsy, then those costs could quickly run into hundreds of pounds that could have been spent on other patients.
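A rough sketch of how those downstream costs accumulate, with an invented specificity and follow-up cost used purely for illustration:

```python
# How false-positive follow-up costs add up. The specificity and follow-up
# cost figures are hypothetical, chosen only to illustrate the scale.

incidence = 0.01        # as in the example above
specificity = 0.95      # hypothetical: 5% of patients without cancer test positive
followup_cost = 400     # hypothetical cost of referral plus MRI/biopsy, in £

patients_tested = 10_000
false_positives = patients_tested * (1 - incidence) * (1 - specificity)
attributable_cost = false_positives * followup_cost
true_cases = patients_tested * incidence

print(f"{false_positives:.0f} unnecessary referrals")                         # 495
print(f"£{attributable_cost:,.0f} in follow-up costs")                        # £198,000
print(f"£{attributable_cost / true_cases:,.0f} extra per true case found")    # £1,980
```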
The job of a health economist is to build a model of the likely effect on the NHS of introducing a change to clinical practice. This is to ensure that the NHS is providing the best possible care given its budget. In the case of cancer diagnosis in primary care, that calculation is complicated by the difficulty of measuring the potential benefits and the tendency of costs to escalate as diseases get less common. Adding new tests and clinical criteria may well have value in improving our ability to catch cancer early, but it is important that we assess these changes systematically, or we could end up doing more harm than good.