
A diagnostic test-specific checklist to assess decision modelling has been developed and piloted.

A review of methods for modelling the cost-effectiveness of new diagnostic tests in the UK NHS suggests that while the majority of analytical models perform well, many fall short of good research practices and principles.

Decision analytic modelling is increasingly used by researchers and NHS decision makers to evaluate the costs and health benefits of introducing a new diagnostic test into practice, yet the methodology is widely recognised as still being under development.

Following a review of existing decision modelling checklists and guidelines for assessing the cost-effectiveness of health interventions, a team from Oxford University and the University of Leeds have developed a modified checklist specifically for diagnostics and applied it to 55 published diagnostic models that were produced for health technology assessment.

Published in PharmacoEconomics Open, the work was funded by the National Institute for Health Research (NIHR) Community Healthcare MedTech and In Vitro Diagnostics Co-operative.

Study author Dr Yaling Yang, a Senior Researcher in Health Economics at Oxford University’s Nuffield Department of Primary Care Health Sciences, said: “The models we assessed in Health Technology Assessments tended to be of relatively high quality, but also suffered key problems including lacking justification of comparators, lacking model validation, insufficient efforts to examine structural uncertainty and obtain treatment effects data.”

Co-author Ms Lucy Abel, a health economist, pointed out: “There was a particular issue when assessing the effectiveness of using several tests to diagnose a health condition, which can be commonplace in clinical practice, where two-thirds of the reports we looked at assumed independence of tests in sequence.”

Assuming independence is a statistical approach that ignores the association between different test results, making tests appear more accurate than they really are.
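The effect of this assumption can be illustrated with a short numerical sketch. The figures below are hypothetical, chosen only to show the mechanism: when two sequential tests share a common failure mode, multiplying their individual error rates (as the independence assumption does) understates the true combined error rate.

```python
# Hypothetical illustration: two tests used in sequence, each with
# specificity 0.90 among disease-free patients (false-positive rate 0.10).
# All numbers are invented for demonstration, not taken from the study.
fp1 = 0.10
fp2 = 0.10

# Under the independence assumption, a model multiplies the error rates:
fp_assumed = fp1 * fp2          # 0.01 -> assumed combined specificity 0.99

# Suppose the tests share a common failure mode (e.g. both react to the
# same confounding condition), so that 8% of disease-free patients
# trigger BOTH tests. The true combined false-positive rate is then:
fp_joint_true = 0.08            # true combined specificity only 0.92

print(f"assumed combined false-positive rate: {fp_assumed:.2f}")
print(f"true combined false-positive rate:    {fp_joint_true:.2f}")
# Ignoring the correlation makes the test sequence appear far more
# accurate (specificity 0.99) than it really is (0.92).
```

The same logic applies to sensitivity: correlated misses mean that repeating a test recovers fewer cases than an independence-based model predicts.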

The researchers also found that 50% of HTA reports did not justify the alternative tests that were selected for comparison and 80% of reports did not clearly report the evidence they used to model treatment effect data – how much a correct test result affects a patient’s health care, and ultimately their health outcomes.  

Developed to consider some of the difficulties and complexities of modelling cost-effectiveness in healthcare settings, the modified checklist comes alongside calls for better tools and guidelines for evaluating the economics of new diagnostic tests.

Access the full paper and assessment checklist:

Use of Decision Modelling in Economic Evaluations of Diagnostic Tests: An Appraisal and Review of Health Technology Assessments in the UK
Yaling Yang, Lucy Abel, James Buchanan, Thomas Fanshawe, Bethany Shinkins.
PharmacoEconomics Open 2018 https://doi.org/10.1007/s41669-018-0109-9
