Sixth-year medical students Sarah Peters and Archie Lodge joined the Centre for Evidence-Based Medicine, part of the Nuffield Department of Primary Care Health Sciences, for three weeks as part of a special study module to improve their knowledge of evidence-based medicine. In this blog, they discuss their project: evaluating available techniques for identifying research irregularities that warrant further scrutiny, and the role these techniques can play in systematic reviews.

Sixth-year medical students Sarah Peters and Archie Lodge

The ongoing pandemic has brought the translation of health and medical research to the forefront of public discourse to an extent unparalleled in recent times. As two medical students on the cusp of becoming doctors, we felt there had never been a more important time to enhance our Evidence-Based Medicine (EBM) skills for both our clinical and academic careers. Fortunately, we had just such an opportunity, joining Dr David Nunan at the Centre for Evidence-Based Medicine (CEBM) for three weeks on an EBM training and research project as part of our special study module.

WHAT’S THE PROBLEM?

An important issue highlighted by the pandemic was how we evaluate studies for inconsistencies in their procedures and data as part of the systematic review process. Systematic reviews and meta-analyses use standardised methods to find, evaluate and synthesise research, often to answer a well-defined clinical question. As part of this process, their authors use tools such as GRADE to rate their confidence in their conclusions, based on an assessment of aspects of the included studies such as risk of bias and heterogeneity (how much variation there is between studies, beyond what measurement error alone would explain). But there is currently no standardised way to evaluate the included studies for procedural, textual, image and data irregularities, which may be present because of carelessness, poor scientific practice or deliberate fabrication.
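As an aside, heterogeneity is commonly quantified with the I² statistic: the percentage of variation in effect estimates that is attributable to between-study differences rather than chance. Here is a minimal sketch in Python; the effect estimates and standard errors are made-up numbers purely for illustration:

```python
import math

def i_squared(effects, std_errors):
    """Cochran's Q and the I-squared heterogeneity statistic,
    using fixed-effect (inverse-variance) weights."""
    weights = [1 / se**2 for se in std_errors]  # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
    df = len(effects) - 1  # degrees of freedom
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical log-odds-ratio estimates and standard errors from five studies
effects = [0.12, 0.35, -0.05, 0.40, 0.22]
std_errors = [0.10, 0.15, 0.12, 0.20, 0.08]
print(f"I² = {i_squared(effects, std_errors):.1f}%")
```

Higher values indicate that the studies disagree with one another more than sampling error alone would predict.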

The pandemic has reminded us of the importance of this: several influential studies on hydroxychloroquine and ivermectin for treating Covid-19 have been retracted because they were based on fraudulent data. Systematic reviews that include such studies inadvertently legitimise their data and findings, and could lead to incorrect clinical recommendations, with a very real possibility of patient harm as well as economic waste.

There is clearly a need for systematic reviewers to consider research irregularities as part of their review process, but there is no obvious way to do so. Many techniques exist for picking up irregularities in research, and they differ in the types of data they apply to and the stages of the review process at which they are useful. There is therefore a need to compile and categorise these techniques from the perspective of how they could be applied in a systematic review.

WHAT DID WE DO?

We started our work with some tutorials from David, who refreshed our memory of the core principles of EBM, focusing on the systematic review process and open science practices. In addition to our own reading around the topic, we spoke to Dr Jeffrey Aronson about mathematical techniques such as Benford’s Law (the observation that in real-world datasets spanning several orders of magnitude, small leading digits occur far more often than large ones).
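To make that concrete, here is a minimal Python sketch, not part of our protocol, that compares a dataset’s observed first-digit frequencies with the proportions Benford’s Law predicts, log10(1 + 1/d) for leading digit d; the sample values are invented for illustration:

```python
import math
from collections import Counter

def leading_digit(x):
    """Return the first significant digit of a non-zero number."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_table(values):
    """Print observed vs Benford-expected first-digit proportions."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)  # Benford's predicted share
        print(f"{d}: observed {counts[d] / len(digits):.3f}, expected {expected:.3f}")

# Hypothetical example: reported sample sizes from a set of studies
benford_table([112, 87, 134, 20, 95, 1450, 23, 310, 64, 178])
```

Marked deviations from the expected proportions do not prove misconduct, but they can flag data that deserve closer scrutiny.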

We then started work with David and Jeff on the protocol for our own restricted systematic review, aiming to capture and classify the methods that currently exist for detecting procedural, textual, image and data irregularities in publications. Reviews of methods used to detect misconduct already exist, and as a starting point we aim to update these. We also decided to broaden the scope of previous reviews: feeling it was important that our terminology moved away from any accusation of deliberate wrongdoing, we adopted more neutral terms such as ‘irregularities’, and we chose to capture relevant preprints as well as published studies. Above all, we hope this project will make progress towards a taxonomy of irregularity-detection techniques that is more useful to those writing systematic reviews than current categorisations, for example by detailing where and how specific techniques might be used in the systematic review process.
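As a flavour of the kind of data-irregularity check such a taxonomy would cover, consider the GRIM test (granularity-related inconsistency of means), which asks whether a reported mean could actually arise from integer-valued data with the stated sample size. This example is ours rather than taken from the protocol, and the numbers are hypothetical:

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test: can `reported_mean` (rounded to `decimals` places)
    arise as the mean of n integer scores? The underlying sum of the
    scores must be an integer."""
    nearest_sum = round(reported_mean * n)
    # Allow for rounding at the reported precision by checking neighbours
    return any(
        round(s / n, decimals) == round(reported_mean, decimals)
        for s in (nearest_sum - 1, nearest_sum, nearest_sum + 1)
    )

# A reported mean of 5.19 from n = 28 integer-valued responses
print(grim_consistent(5.19, 28))  # False: no integer sum of 28 scores gives 5.19
```

A single inconsistency may just be a typo, but clusters of GRIM failures in one paper are the kind of signal a reviewer might want to follow up.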

We registered our project and published the protocol on the Open Science Framework.

WHAT DID WE LEARN?

During our time with the CEBM, we gained insights into the process of systematic reviews and had the opportunity to get hands-on by starting our own review. It was great to be able to learn from experts in the field while also hopefully contributing something ourselves, and we will start our careers with a new appreciation for the process and challenges of consolidating evidence and translating it into clinical practice.
