Is catching cancer early worth the extra cost? (Image: pound signs and a person having their lungs X-rayed. © University of Oxford)

When it comes to cancer, there’s one phrase that crops up again and again in TV and film: “The good news is, we’ve caught it early”. The idea that early diagnosis can make all the difference is a compelling one, and as a result it has been embraced as something of a magic bullet by health policy makers and politicians.

The hope is that by catching the disease in its earlier stages, treatments will be more effective, giving patients more time in good health or even a cure. As a result, last year NICE launched new guidance specifically designed to improve recognition and referral of patients with cancer. Since over 90% of these patients will first contact their GP, a lot of the focus is on ways to improve the detection of cancer in primary care.

However, despite the rhetoric, achieving changes here is by no means straightforward. Cancer is still a thankfully rare condition in the context of a GP’s daily workload, and symptoms are often non-specific, making identification challenging in many cases. GPs have limited time and the NHS has a limited budget; overzealous referral practices could result in cancer units being overwhelmed with extra patients who don’t need to be there, reducing the quality of care for those who do. It could also slow down the diagnosis of these extra patients, who instead have to go through the stress of an unnecessary cancer referral process before they can access the treatment they need.

As a result it is crucial to understand the consequences of any changes in practice, whether they come in the form of new diagnostic tests, clinical decision rules, or incentives such as the two week wait system.

By predicting the costs and outcomes of a variety of possible strategies, health economists can make recommendations about which options are most cost effective, and will provide the best outcomes for patients, given the limited budget. It is worth adding that a cost-ineffective treatment or test isn’t necessarily expensive, it could also mean that there are just better alternatives for the same cost. Similarly, expensive treatments can be cost-effective if patients get a lot of benefit from them.
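
To make that concrete, here is a minimal sketch in Python of the core calculation, the incremental cost-effectiveness ratio (ICER): the extra cost of a new strategy divided by the extra health benefit it delivers, usually measured in quality-adjusted life years (QALYs). All the figures below are invented purely for illustration.

# Illustrative sketch only: the costs and QALYs below are made up,
# not taken from any real appraisal.

def icer(cost_new, qalys_new, cost_old, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

current_pathway = {"cost": 500.0, "qalys": 5.0}
new_pathway = {"cost": 2000.0, "qalys": 5.2}   # dearer, but slightly more benefit

ratio = icer(new_pathway["cost"], new_pathway["qalys"],
             current_pathway["cost"], current_pathway["qalys"])
print(f"ICER: £{ratio:,.0f} per QALY gained")   # £7,500 per QALY

Here the new pathway costs four times as much per patient, yet at £7,500 per QALY it would sit comfortably under the roughly £20,000-£30,000 per QALY threshold NICE typically uses, illustrating how an expensive option can still be cost-effective if patients gain enough from it.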

Part one of measuring cost-effectiveness: what’s the effect?

In predicting these effects, researchers face a number of obstacles - the largest of which is estimating the health benefits that a new diagnostic technology will actually bring. Getting a diagnosis does not, in itself, offer any health benefit at all, beyond the reassurance of knowing what is wrong with you. It is typically only worth changing the diagnostic pathway if doing so will result in some change to the way patients are treated, which in turn gives them better outcomes (living longer or with a better quality of life). In the case of cancer diagnosis, the hope is that catching patients earlier will do this. However, while there is evidence for this in some cases, and it makes intuitive sense given the way the disease progresses, it isn’t always certain.

One reason for this is measurement. In cancer we often use five-year survival to assess changes in outcomes. But if a patient is diagnosed six months earlier, then even if that earlier diagnosis doesn’t change their overall survival at all, that statistic will still suggest that they survived for six months longer. This is one of the reasons cancer survival times seem to vary so much between countries - each country effectively starts the survival clock at a different point, because patients tend to be diagnosed at different stages of the disease.
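
This distortion is usually called lead-time bias, and a toy calculation makes it easy to see. The numbers below are invented for illustration only: moving the date of diagnosis forward changes the measured survival time even though the date of death stays exactly the same.

# Lead-time bias, with invented numbers: the patient's disease course is
# identical in both scenarios; only the moment the survival clock starts moves.

death_after_onset = 5.2      # years from disease onset to death (unchanged)
diagnosis_late = 0.5         # diagnosed 6 months after onset (current practice)
diagnosis_early = 0.0        # diagnosed at onset (the "earlier" pathway)

survival_late = death_after_onset - diagnosis_late     # 4.7 years
survival_early = death_after_onset - diagnosis_early   # 5.2 years

print(f"Late diagnosis:  survives {survival_late:.1f} years -> not a 5-year survivor")
print(f"Early diagnosis: survives {survival_early:.1f} years -> counted as a 5-year survivor")
# Five-year survival improves on paper, yet the patient dies at exactly
# the same time in both cases.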

Alternatively, the evidence can be muddied by a “reverse-causality” effect, where it can look like early diagnosis is actually associated with worse outcomes. This happens because patients with more severe symptoms and a worse prognosis tend to be the most obvious cases, and therefore are diagnosed much sooner. In this case, disease severity is causing the difference in diagnostic delay, and not vice versa.

Another challenge is in measuring how well tests actually work. Unlike cancer treatments, which may have undergone hundreds of clinical trials before getting to a patient, diagnostic tests will have had comparatively few assessments. Studies will have been conducted to measure how well a test can distinguish between healthy and sick patients (case-control studies), but there are usually far fewer studies looking at a whole cross-section of potential cases (cohort studies).

In practice, though, patients will usually already have something wrong with them, and the test may not be as good at identifying who in a group of sick patients has cancer in particular, rather than some other condition.
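
A rough sketch shows why this matters. The sensitivity and specificity figures below are hypothetical, but they illustrate how a test that separates cancer patients from healthy volunteers very well can still produce mostly false positives once it is applied to the symptomatic, low-prevalence population a GP actually sees.

# Hypothetical figures: how test performance measured against healthy
# controls can overstate its value in symptomatic primary care patients.

def positive_predictive_value(sensitivity, specificity, prevalence):
    # Probability that a patient with a positive result actually has cancer.
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

prevalence = 0.01   # roughly 1 in 100 tested patients has cancer

# Specificity estimated against healthy volunteers (case-control study)...
ppv_vs_healthy = positive_predictive_value(0.90, 0.95, prevalence)
# ...may drop when the "negatives" are sick patients whose other
# conditions also trigger the test (closer to a real cohort).
ppv_vs_symptomatic = positive_predictive_value(0.90, 0.80, prevalence)

print(f"PPV against healthy controls:   {ppv_vs_healthy:.0%}")      # ~15%
print(f"PPV among symptomatic patients: {ppv_vs_symptomatic:.0%}")  # ~4%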

As a result, it can be hard to measure how well a test will work from just the trial data, and this department is conducting ongoing studies investigating how to better measure and account for this.

It’s also important to correctly identify the consequences of any inaccurate results. While the harm of missing a case is obvious, there is also potential harm associated with a false positive result. This can include the psychological harm of further testing, but also physical harms from more invasive investigations, such as biopsies, and even surgery in some cases, before the diagnosis can be ruled out.

Part two: Is the cost worth it?

On top of this, researchers need to consider any additional costs. And the extra cost isn’t just the cost for each patient who has the condition, but for every test taken.

If a test costs £10, but the disease incidence is only one percent (a fairly common situation for cancer in primary care), then it will cost the NHS £1000 to identify each case using that test - the cost of all the negative tests as well as the positive.

Now suppose there is an existing test that identifies 80% of cases but costs only £5. Switching everyone to the better test finds one extra case in every 500 patients, so each additional case detected costs £2,500 (the extra £5 spent on each of those 500 patients compared with the cheaper test).
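
The same arithmetic, written out step by step. The only assumption added here is that the dearer test catches every case, which is what makes the "one extra patient in every 500" figure come out.

# Reproducing the worked example above.

incidence = 0.01          # 1 case per 100 patients tested
cost_new_test = 10.0      # £ per test
cost_old_test = 5.0       # £ per test
detection_old = 0.80      # existing test finds 80% of cases
detection_new = 1.00      # assume, for illustration, the new test finds them all

# Cost of identifying one case with the new test (all the negative
# tests are paid for too):
cost_per_case = cost_new_test / incidence
print(f"Cost per case identified: £{cost_per_case:,.0f}")   # £1,000

# Extra cases found by switching everyone to the new test:
extra_cases_per_patient = incidence * (detection_new - detection_old)   # 0.002, i.e. 1 in 500
patients_per_extra_case = 1 / extra_cases_per_patient                   # 500
extra_cost_per_extra_case = (cost_new_test - cost_old_test) * patients_per_extra_case
print(f"Extra cost per additional case found: £{extra_cost_per_extra_case:,.0f}")   # £2,500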

That’s a lot of money to spend just to get a diagnosis, especially if there is uncertainty over whether it will provide any long-term benefits.

Besides this, it is highly likely the test will pick up as positive some patients who don’t have cancer. If the outcome of a positive test result is a referral and further testing, all of those costs are directly attributable to the test (that patient would not have been unnecessarily referred without the test), so they are added on.

If the next stage is an MRI or biopsy, then those costs could quickly run into hundreds of pounds that could have been spent on other patients.
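
As a rough, purely illustrative sketch (the £400 follow-up cost and 95% specificity are assumptions, not real figures), the downstream bill from false positives alone can dwarf the cost of the test itself:

# Hypothetical figures: downstream cost of false positives when every
# positive result leads to a referral and an MRI or biopsy.

patients_tested = 1000
prevalence = 0.01           # 10 true cases among 1,000 patients
specificity = 0.95          # 5% of non-cancer patients still test positive
cost_follow_up = 400.0      # assumed £ per MRI/biopsy triggered by a positive

false_positives = (1 - specificity) * (1 - prevalence) * patients_tested   # ~50 patients
follow_up_cost = false_positives * cost_follow_up

print(f"False positives per {patients_tested} tests: {false_positives:.0f}")
print(f"Follow-up costs attributable to them: £{follow_up_cost:,.0f}")     # ~£19,800
# None of these referrals would have happened without the test, so all of
# this cost is attributed to it.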

The job of a health economist is to build a model of the likely effect on the NHS of introducing a change to clinical practice. This is to ensure that the NHS is providing the best possible care given its budget. In the case of cancer diagnosis in primary care, that calculation is complicated by the difficulty of measuring the potential benefits and the tendency of costs to escalate as diseases get less common. Adding new tests and clinical criteria may well have value in improving our ability to catch cancer early, but it is important that we assess these changes systematically, or we could end up doing more harm than good.

Opinions expressed are those of the author/s and not of the University of Oxford. Readers' comments will be moderated - see our guidelines for further information.

Comments

Jonathan Miller, South West Cancer Network Manager says:
Tuesday, 23 May 2017, 4.29 pm

Dear Lucy
Is there any information available that assesses the costs as you set out? We are looking at increasing the number of diagnostic tests carried out to diagnose cancer earlier. We have evidence from CRUK (Saving Lives, Averting Costs) that early cancer is cheaper to treat, but we need to know the cost of the additional tests needed to get this earlier diagnosis.

Thanks

Lucy Abel says:
Thursday, 1 June 2017, 10.41 am

Hi Jonathan,

Thanks for your comment.

There isn't much research in this area as yet, unfortunately. The closest approximations are in screening, particularly in colorectal and breast cancers. These kinds of cost-effectiveness studies will give some indication of where the extra costs come from. It's usually not the test itself so much as the extra intervention it leads to that adds cost, particularly in primary care. For example, a positive FOB test will mean a colonoscopy, so false positive costs are the cost of the test plus the colonoscopy, which is much more expensive.

The CRUK analysis is interesting. One problem with it is that we don't currently have any useful methods for assessing the benefits of earlier diagnosis, so whether it actually helps is somewhat controversial and likely to have been overestimated by CRUK. The main issue, as I described above, is that we don't know how much extra survival is due to better treatment (ie, additional survival) and how much is just from longer term monitoring (natural survival). CRUK tries to overcome this by assessing outcomes by stage, rather than survival time, but the relationship between stage and survival isn't always that straightforward. Patients don't progress through stages in a linear fashion, so while stage 3 results in worse outcomes than stage 1, it doesn't follow that it would be possible to diagnose all stage 3s at stage 1 (if onset is rapid and initially symptomless, for example), or even that a patient who is currently diagnosed at stage 3 would necessarily have better outcomes at stage 1, and wouldn't just rapidly progress anyway.

These are worst case scenarios, but ignoring them will result in overestimating the cost-effectiveness of diagnostics. The only evidence I've seen on this from another source is from York, looking at the impact of budget changes on health outcomes (Claxton, Methods for estimating the cost-effectiveness threshold). The essential point from that piece is that improving survival in cancer will probably never be cost-saving - recurrence is just too frequent. Since this work is based on actual expenditure rather than assumptions about survival and recurrence, I think it's likely to be more accurate.

Essentially if a diagnostic is claiming it will increase early diagnosis it will probably increase costs, in the long term as well as the short. If you are aspiring to cost savings, then introduction of the test followed by rigorous evaluation of outcomes (how many fewer patients do you expect to have being treated for advanced cancers after 5 years, for example?) is essential to confirm CRUK's estimates.
