Examining the effect of evaluation sample size on the sensitivity and specificity of COVID-19 diagnostic tests in practice: a simulation study.
Sammut-Powell C., Reynard C., Allen J., McDermott J., Braybrook J., Parisi R., Lasserson D., Body R., CONDOR steering committee.
BACKGROUND: In response to the global COVID-19 pandemic, many in vitro diagnostic (IVD) tests for SARS-CoV-2 have been developed. Given the urgent clinical demand, researchers must balance the desire for precise estimates of sensitivity and specificity against the need for rapid implementation. To complement the estimates of precision used for sample size calculations, we aimed to estimate the probability that an IVD will fail to perform to expected standards after implementation, following clinical validation studies of varying sample sizes.

METHODS: We assumed that clinical validation study estimates met the 'desirable' performance criteria (sensitivity 97%, specificity 99%) in the target product profile (TPP) published by the Medicines and Healthcare products Regulatory Agency (MHRA). To estimate the real-world impact of the imprecision imposed by sample size, we used Bayesian posterior calculations together with Monte Carlo simulations of 10,000 independent iterations, each with 5,000 participants. We varied the prevalence between 1% and 15% and the sample size between 30 and 2,000. For each sample size, we estimated the probability that diagnostic accuracy would fail to meet the TPP criteria after implementation.

RESULTS: For a validation study demonstrating 'desirable' sensitivity in a sample of 30 participants who test positive for COVID-19 by the reference standard, the probability that real-world performance will fail to meet the 'desirable' criteria is 10.7-13.5%, depending on prevalence. Theoretically, demonstrating the 'desirable' performance in 90 positive participants would reduce that probability to below 5%. A marked reduction in the probability of failing to meet the 'desirable' specificity occurred between samples of 100 (19.1-21.5%) and 160 (4.3-4.8%) negative participants, with little further improvement above 160 negative participants.
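As a rough illustration of the approach described in METHODS, the sketch below pairs a Beta posterior for sensitivity with a Monte Carlo simulation of a deployed cohort. This is not the authors' code: the uniform Beta(1, 1) prior, the failure criterion (the deployed point estimate of sensitivity falling below the TPP target), and all function and parameter names are assumptions made for illustration only; the paper's exact criterion may differ.

```python
import random

def prob_failure(n_pos_validation, target=0.97, prevalence=0.05,
                 n_deploy=5000, iterations=10_000, seed=1):
    """Estimate the probability that real-world sensitivity falls below a
    TPP target, after a validation study that exactly met that target.
    Illustrative sketch only; assumptions noted in comments."""
    rng = random.Random(seed)
    # Validation counts consistent with the observed 'desirable' sensitivity.
    tp = round(target * n_pos_validation)   # true positives in validation
    fn = n_pos_validation - tp              # false negatives in validation
    failures = 0
    for _ in range(iterations):
        # Draw a plausible "true" sensitivity from the Beta posterior
        # (uniform Beta(1, 1) prior assumed).
        sens = rng.betavariate(1 + tp, 1 + fn)
        # Number of reference-standard-positive cases in the deployed cohort.
        n_pos = sum(rng.random() < prevalence for _ in range(n_deploy))
        if n_pos == 0:
            continue
        # Cases the IVD correctly detects at this "true" sensitivity.
        detected = sum(rng.random() < sens for _ in range(n_pos))
        # Failure criterion (an assumption): deployed estimate below target.
        if detected / n_pos < target:
            failures += 1
    return failures / iterations
```

With the paper's full settings this loop is slow in pure Python; reducing `n_deploy` and `iterations` gives a quick approximation, e.g. comparing `prob_failure(30, n_deploy=1000, iterations=500)` with `prob_failure(90, n_deploy=1000, iterations=500)` for the two positive sample sizes discussed in RESULTS.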
CONCLUSION: Based on imprecision alone, small evaluation studies can lead to the acceptance of diagnostic tests that are likely to fail to meet performance targets when deployed. There are diminishing returns in reducing the uncertainty around an accuracy estimate above a total sample size of 250 (90 positive and 160 negative participants).