BACKGROUND: Quality improvement (QI) in health service programmes often involves small, low-risk programmatic changes, such as modified workflows, that incrementally improve outcomes and accumulate over time into significant overall gains in quality and efficiency. Although such changes are common in health services, they are rarely evaluated with statistically rigorous designs, partly because conventional randomised trials are perceived as inefficient for detecting modest effects. This study, motivated by a vision screening and referral programme, focuses on evaluating the modest but important impacts of low-risk QI interventions.

METHODS: To advance data-driven approaches to quality improvement, we used a simulation study to explore Bayesian adaptive trial designs for comparing two variants of a programmatic change that yield small improvements in outcomes. The study examined key adaptive design features, including interim analysis frequency, prior specification, and early stopping rules for efficacy and equivalence. Error rates, sample size, and bias were assessed across scenarios with small effect sizes ranging from 0% to 5%.

RESULTS: The findings were used to configure an ideal trial design that prioritises rapid identification of the more effective programmatic change while minimising the risk of adopting an inferior one. The recommended design incorporates a sceptical prior, a stringent stopping rule for efficacy, and a relaxed criterion for stopping for equivalence. Under this design, a marginal improvement as small as 1% could be detected with high probability using considerably fewer participants than a conventional, fixed-size randomised controlled trial would require.

CONCLUSIONS: Bayesian adaptive trial designs offer a feasible approach for evaluating low-risk, incremental QI interventions in high-throughput service settings. Their use may support more efficient, data-driven decision-making when modest improvements are expected and the consequences of incorrect adoption are limited.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-026-02780-w.
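To make the design features named in the abstract concrete, the following is a minimal sketch of one simulated Bayesian adaptive trial under a Beta-Binomial model: batched enrolment with interim analyses, a sceptical prior, a stringent posterior-probability threshold to stop for efficacy, and a more relaxed threshold to stop for equivalence. All parameter names, default values, and thresholds here are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_adaptive_trial(p_a, p_b, batch=200, max_n=5000,
                       prior_alpha=25.0, prior_beta=25.0,
                       eff_cut=0.99, equiv_cut=0.90, margin=0.01):
    """Simulate one two-arm Bayesian adaptive trial of a binary outcome.

    prior_alpha/prior_beta: pseudo-counts of a sceptical Beta prior
        centred on 0.5, shrinking early estimates toward no difference.
    eff_cut: stringent posterior-probability threshold to declare one
        programme variant better (stop for efficacy).
    equiv_cut: relaxed threshold to declare the variants equivalent
        within `margin` (stop for equivalence).
    (All defaults are hypothetical, chosen only for illustration.)
    """
    successes = np.zeros(2)
    n = np.zeros(2)
    while n.sum() < max_n:
        # Enrol one batch, split 1:1 between the two programme variants.
        for arm, p in enumerate((p_a, p_b)):
            successes[arm] += rng.binomial(batch // 2, p)
            n[arm] += batch // 2
        # Interim analysis: Monte Carlo draws from each Beta posterior.
        post_a = rng.beta(prior_alpha + successes[0],
                          prior_beta + n[0] - successes[0], 10_000)
        post_b = rng.beta(prior_alpha + successes[1],
                          prior_beta + n[1] - successes[1], 10_000)
        if np.mean(post_b > post_a) > eff_cut:
            return "B better", int(n.sum())
        if np.mean(post_a > post_b) > eff_cut:
            return "A better", int(n.sum())
        if np.mean(np.abs(post_b - post_a) < margin) > equiv_cut:
            return "equivalent", int(n.sum())
    return "inconclusive", int(n.sum())

decision, total_n = run_adaptive_trial(p_a=0.90, p_b=0.95)
print(decision, total_n)
```

Repeating this simulation many times per scenario (true effect sizes from 0% to 5%) is the kind of exercise that would let one estimate error rates, expected sample size, and bias for a candidate design, in the spirit of the study's simulation approach.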

Original publication

DOI

10.1186/s12874-026-02780-w

Type

Journal article

Publication Date

2026-02-27T00:00:00+00:00

Volume

26

Keywords

Adaptive trial, Bayesian analysis, Interim analysis, Quality improvement, Simulation, Stopping rules