Search results
Positive messages may reduce patient pain: A meta-analysis
Introduction Current treatments for pain have limited benefits and worrying side effects. Some studies suggest that pain is reduced when clinicians deliver positive messages. However, the effects of positive messages are heterogeneous and have not been subject to meta-analysis. We aimed to estimate the efficacy of positive messages for pain reduction. Methods We included randomized trials of the effects of positive messages in a subset of the studies included in a recent systematic review of context factors for treating pain. Several electronic databases were searched. Reference lists of relevant studies were also searched. Two authors independently undertook study selection, data extraction, risk of bias assessment, and analyses. Our primary outcome measures were differences in patient- or observer-reported pain between groups who were given positive messages and those who were not. Results Of the 16 randomized trials (1703 patients) that met the inclusion criteria, 12 trials had sufficient data for meta-analysis. The pooled standardized effect size was −0.31 (95% confidence interval [CI] −0.61 to −0.01, p = 0.04, I2 = 82%). The effect remained in the same direction but was not statistically significant after we excluded studies considered to have a high risk of bias (standardized effect size −0.17, 95% CI −0.54 to 0.19, p = 0.36, I2 = 84%). Conclusion Care of patients with chronic or acute pain may be enhanced when clinicians deliver positive messages about possible clinical outcomes. However, we have identified several limitations of the present study that suggest caution when interpreting the results. We recommend further high-quality studies to confirm (or falsify) our result.
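The pooling step described in this abstract can be sketched in a few lines. This is an illustrative inverse-variance (fixed-effect) calculation, not the authors' analysis code; the three effect sizes and standard errors below are hypothetical, and the trials in the review would in practice call for a random-effects model given the high heterogeneity (I2 = 82%).

```python
import math

def pooled_effect(effects, standard_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical standardized effect sizes and standard errors for three trials
est, se = pooled_effect([-0.45, -0.30, -0.10], [0.15, 0.20, 0.12])
ci = (est - 1.96 * se, est + 1.96 * se)  # approximate 95% CI
```

A random-effects model differs only in that each weight becomes 1/(se² + τ²), where τ² is the estimated between-study variance.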
Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: A methodological review of health technology assessments
Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results.
Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
The influence of maternally derived antibody and infant age at vaccination on infant vaccine responses: An individual participant meta-analysis
IMPORTANCE The design of infant immunization schedules requires an understanding of the factors that determine the immune response to each vaccine antigen. DATA SOURCES Deidentified individual participant data from GlaxoSmithKline clinical trials were obtained through Clinical Study Data Request. The data were requested on January 2, 2015, and final data were received on April 11, 2016. STUDY SELECTION Immunogenicity trials of licensed or unlicensed vaccines administered to infants were included if antibody concentrations in infants were measured prior to the first dose of vaccine. DATA EXTRACTION AND SYNTHESIS The database was examined; studies that appeared to have appropriate data were reviewed. MAIN OUTCOMES AND MEASURES Antigen-specific antibody concentration measured 1 month after priming vaccine doses, before booster vaccination, and 1 month after booster vaccine doses. RESULTS A total of 7630 infants from 32 studies in 17 countries were included. Mean (SD) age at baseline was 9.0 (2.3) weeks; 3906 (51.2%) were boys. Preexisting maternal antibody inhibited infant antibody responses to priming doses for 20 of 21 antigens. The largest effects were observed for inactivated polio vaccine, where 2-fold higher maternal antibody concentrations resulted in 20% to 28% lower postvaccination antibody concentration (geometric mean ratios [GMRs], type 1: 0.80; 95% CI, 0.78-0.83; type 2: 0.72; 95% CI, 0.69-0.74; type 3: 0.78; 95% CI, 0.75-0.82). For acellular pertussis antigens, 2-fold higher maternal antibody was associated with 11% lower postvaccination antibody for pertussis toxoid (GMR, 0.89; 95% CI, 0.87-0.90) and filamentous hemagglutinin (GMR, 0.89; 95% CI, 0.88-0.90) and 22% lower pertactin antibody (GMR, 0.78; 95% CI, 0.77-0.80). For tetanus and diphtheria, these estimates were 13% (GMR, 0.87; 95% CI, 0.86-0.88) and 24% (GMR, 0.76; 95% CI, 0.74-0.77), respectively.
The influence of maternal antibody was still evident in reduced responses to booster doses of acellular pertussis, inactivated polio, and diphtheria vaccines at 12 to 24 months of age. Children who were older when first immunized had higher antibody responses to priming doses for 18 of 21 antigens, after adjusting for the effect of maternal antibody concentrations. The largest effect was seen for polyribosylribitol phosphate antibody, where responses were 71% higher per month (GMR, 1.71; 95% CI, 1.52-1.92). CONCLUSIONS AND RELEVANCE Maternal antibody concentrations and infant age at first vaccination both influence infant vaccine responses. These effects are seen for almost all vaccines contained in global immunization programs and influence immune response for some vaccines even at the age of 24 months. These data highlight the potential for maternal immunization strategies to influence established infant programs.
Prediction of violent crime on discharge from secure psychiatric hospitals: A clinical prediction rule (FoVOx)
Background Current approaches to assess violence risk in secure hospitals are resource intensive, limited by accuracy and authorship bias and may have reached a performance ceiling. This study seeks to develop scalable predictive models for violent offending following discharge from secure psychiatric hospitals. Methods We identified all patients discharged from secure hospitals in Sweden between January 1, 1992 and December 31, 2013. Using multiple Cox regression, pre-specified criminal, sociodemographic, and clinical risk factors were included in a model that was tested for discrimination and calibration in the prediction of violent crime at 12 and 24 months post-discharge. Risk cut-offs were pre-specified at 5% (low vs. medium) and 20% (medium vs. high). Results We identified 2248 patients with 2933 discharges into community settings. We developed a 12-item model with good measures of calibration and discrimination (area under the curve = 0.77 at 12 and 24 months). At 24 months post-discharge, using the 5% cut-off, sensitivity was 96% and specificity was 21%. Positive and negative predictive values were 19% and 97%, respectively. Using the 20% cut-off, sensitivity was 55%, specificity 83% and the positive and negative predictive values were 37% and 91%, respectively. The model was used to develop a free online tool (FoVOx). Interpretation We have developed a prediction score in a Swedish cohort of patients discharged from secure hospitals that can assist in clinical decision-making. Scalable predictive models for violence risk are possible in specific patient groups and can free up clinical time for treatment and management. Further evaluation in other countries is needed. Funding Wellcome Trust (202836/Z/16/Z) and the Swedish Research Council. The funding sources had no involvement in writing of the manuscript or decision to submit or in data collection, analysis or interpretation or any aspect pertinent to the study.
Identification of low risk of violent crime in severe mental illness with a clinical prediction tool (Oxford Mental Illness and Violence tool [OxMIV]): a derivation and validation study
Background: Current approaches to stratify patients with psychiatric disorders into groups on the basis of violence risk are limited by inconsistency, variable accuracy, and unscalability. To address the need for a scalable and valid tool to assess violence risk in patients with schizophrenia spectrum or bipolar disorder, we describe the derivation of a score based on routinely collected factors and present findings from external validation. Methods: On the basis of a national cohort of 75 158 Swedish individuals aged 15–65 years with a diagnosis of severe mental illness (schizophrenia spectrum or bipolar disorder) with 574 018 patient episodes between Jan 1, 2001, and Dec 31, 2008, we developed predictive models for violent offending (primary outcome) within 1 year of hospital discharge for inpatients or clinical contact with psychiatric services for outpatients (patient episode) through linkage of population-based registers. We developed a derivation model to determine the relative influence of prespecified criminal history and sociodemographic and clinical risk factors, which are mostly routinely collected, and then tested it in an external validation. We measured discrimination and calibration for prediction of violent offending at 1 year using specified risk cutoffs. Findings: Of the cohort of 75 158 patients with schizophrenia spectrum or bipolar disorder, we assigned 58 771 (78%) to the derivation sample and 16 387 (22%) to the validation sample. In the derivation sample, 830 (1%) individuals committed a violent offence within 12 months of their patient episode. We developed a 16-item model. The strongest predictors of violent offending within 12 months were conviction for previous violent crime (adjusted odds ratio 5·03 [95% CI 4·23–5·98]; p<0·0001), male sex (2·32 [1·91–2·81]; p<0·0001), and age (0·63 per 10 years of age [0·58–0·67]; p<0·0001). 
In external validation, the model showed good measures of discrimination (c-index 0·89 [0·85–0·93]) and calibration. For risk of violent offending at 1 year, with a 5% cutoff, sensitivity was 62% (95% CI 55–68) and specificity was 94% (93–94). The positive predictive value was 11% and the negative predictive value was more than 99%. We used the model to generate a simple web-based risk calculator (Oxford Mental Illness and Violence tool [OxMIV]). Interpretation: We have developed a prediction score in a national cohort of patients with schizophrenia spectrum or bipolar disorder, which can be used as an adjunct to decision making in clinical practice by identifying those who are at low risk of violent offending. The low positive predictive value suggests that further clinical assessment in individuals at high risk of violent offending is required to establish who might benefit from additional risk management. Further validation in other countries is needed. Funding: Wellcome Trust and Swedish Research Council.
Prevalence and decay of maternal pneumococcal and meningococcal antibodies: A meta-analysis of type-specific decay rates
Background At the time of an infant's initial vaccination at age ∼2 to 3 months, some infants already have maternal antibodies against vaccine antigens, and these can suppress the immune response to vaccination. Modelling the effects of maternal antibody and the timing of infant doses on the antibody response to vaccination requires estimates of the rate of maternal antibody decay. Decay rates are not well characterised in the medical literature. We investigated variation in the prevalence of maternal anti-capsular pneumococcal and meningococcal antibodies in infants in 14 countries, and estimated type-specific half-lives. Methods Individual participant serological data were obtained from clinical trials. Half-lives were estimated from antibody concentrations in infants who did not receive meningococcal or pneumococcal vaccines. Results The seroprevalence of maternal pneumococcal antibodies was highest for serotypes 14 and 19F (92% and 80%, respectively) and lowest for serotypes 4 and 1 (30% and 34%, respectively). Half-life estimates ranged from 38.7 days (95% CI 36.6–41.0) for serotype 6B to 48.3 days (95% CI 46.7–50.2) for serotype 5. The overall half-life was 42.6 days (95% CI 41.5–43.7). Seroprevalence was highest in Mali, Nigeria, India, and the Philippines (all >65%) and lowest in the Czech Republic and Finland (both <45%). In studies of meningococcal vaccines, seroprevalence was 13% for group C (half-life 39.8 days, 95% CI 33.4–49.4) and 43% for group A (half-life 43.1 days, 95% CI 39.8–47.2). Conclusion Substantial proportions of infants in many countries have antibodies to vaccine serotypes of pneumococcus; however, fewer infants have maternally acquired antibodies to groups A and C meningococcus. Passively acquired antibodies to capsular polysaccharides decay with a half-life of approximately 6 weeks. These estimates are useful for modelling the impact of proposed vaccination programmes, and for consideration of schedules with a delayed start.
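The half-life estimates above imply a simple exponential-decay model for passively acquired antibody. As an illustrative sketch (using the pooled pneumococcal half-life of 42.6 days reported in the abstract, and a first vaccine dose at roughly 9 weeks, which is an assumption for the example):

```python
def decay_fraction(half_life_days: float, t_days: float) -> float:
    """Fraction of the initial antibody concentration remaining after t days,
    assuming first-order (exponential) decay."""
    return 0.5 ** (t_days / half_life_days)

# Fraction of maternal antibody remaining at a first dose given at ~9 weeks (63 days),
# with the pooled half-life of 42.6 days: roughly a third remains.
remaining = decay_fraction(42.6, 63)
```

This is the kind of calculation that supports the abstract's closing point: delaying the first dose by a few half-lives substantially reduces the maternal antibody present at vaccination.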
Comparative efficacy of drugs for treating giardiasis: A systematic update of the literature and network meta-analysis of randomized clinical trials
Background: Giardiasis is the commonest intestinal protozoal infection worldwide. The current first-choice therapy is metronidazole. Recently, other drugs with potentially higher efficacy or with fewer and milder side effects have increased in popularity, but evidence is limited by a scarcity of randomized controlled trials (RCTs) comparing the many treatment options available. Network meta-analysis (NMA) is a useful tool to compare multiple treatments when there is limited or no direct evidence available. Objectives: To compare the efficacy and side effects of all available drugs for the treatment of giardiasis. Methods: We selected all RCTs included in systematic reviews and expert reviews of all treatments for giardiasis published until 2014, extended the systematic literature search until 2016, and identified new studies by scanning reference lists for relevant studies. We then conducted an NMA of all available treatments for giardiasis by comparing parasitological cure (efficacy) and side effects. Results: We identified 60 RCTs from 58 reports (46 from published systematic reviews, 8 from reference lists and 4 from the updated systematic search). Data from 6714 patients, 18 treatments and 42 treatment comparisons were available. Tinidazole was associated with higher parasitological cure than metronidazole [relative risk (RR) 1.23, 95% CI 1.12-1.35] and albendazole (RR 1.35, 95% CI 1.21-1.50). Taking into consideration clinical efficacy, side effects and amount of the evidence, tinidazole was found to be the most effective drug. Conclusions: We provide additional evidence that single-dose tinidazole is the best available treatment for giardiasis in symptomatic and asymptomatic children and adults.
Serotype-Specific Correlates of Protection for Pneumococcal Carriage: An Analysis of Immunity in 19 Countries
Background. Pneumococcal conjugate vaccines (PCVs) provide direct protection against disease in those vaccinated, and interrupt transmission through the prevention of nasopharyngeal (NP) carriage. Methods. We analyzed immunogenicity data from 5224 infants who received PCV in prime-boost schedules. We defined any increase in antibody between the 1-month postpriming visit and the booster dose as an indication of NP carriage ("seroincidence"). We calculated antibody concentrations using receiver operating characteristic curves, and used generalized additive models to compute their protective efficacy against seroincidence. To support seroincidence as a marker of carriage, we compared seroincidence in a randomized immunogenicity trial in Nepal with the serotype-specific prevalence of carriage in the same community. Results. In Nepalese infants, seroincidence of carriage closely correlated with serotype-specific carriage prevalence in the community. In the larger data set, antibody concentrations associated with seroincidence were lowest for serotypes 6B and 23F (0.50 μg/mL and 0.63 μg/mL, respectively), and highest for serotypes 19F and 14 (2.54 μg/mL and 2.48 μg/mL, respectively). The protective efficacy of antibody at these levels was 62% and 74% for serotypes 6B and 23F, and 87% and 84% for serotypes 19F and 14. Protective correlates were on average 2.15 times higher in low/lower middle-income countries than in high/upper middle-income countries (geometric mean ratio, 2.15 [95% confidence interval, 1.46-3.17]; P = .0024). Conclusions. Antibody concentrations associated with protection vary between serotypes. Higher antibody concentrations are required for protection in low-income countries. These findings are important for global vaccination policy, to interrupt transmission by protecting against carriage.
Interactive visualisation for interpreting diagnostic test accuracy study results
Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared with decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting and for users of test evaluations to facilitate interpretation and application of the results.
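The pre-test/post-test relationship that the article's interactive tools illustrate follows directly from Bayes' theorem. A minimal sketch (the numbers in the example are illustrative, not taken from the C reactive protein case study):

```python
def post_test_probability(prevalence: float, sensitivity: float,
                          specificity: float, test_positive: bool = True) -> float:
    """Probability of disease after a test result, given the pre-test
    probability (prevalence) and the test's sensitivity and specificity."""
    if test_positive:
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    return false_neg / (false_neg + true_neg)

# Example: 10% prevalence, 90% sensitivity, 80% specificity.
after_positive = post_test_probability(0.10, 0.90, 0.80)               # ~0.33
after_negative = post_test_probability(0.10, 0.90, 0.80, False)        # ~0.014
```

The example shows the article's central point: with the same test, the post-test probability moves with the pre-test probability, so the same positive result means something very different in low- and high-prevalence populations.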
Post-imaging colorectal cancer or interval cancer rates after CT colonography: a systematic review and meta-analysis
Background: CT colonography is highly sensitive for colorectal cancer, but interval or post-imaging colorectal cancer rates (diagnosis of cancer after initial negative CT colonography) are unknown, as are their underlying causes. We did a systematic review and meta-analysis of post-CT colonography and post-imaging colorectal cancer rates and causes to address this gap in understanding. Methods: We systematically searched MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials. We included randomised, cohort, cross-sectional, or case-control studies published between Jan 1, 1994, and Feb 28, 2017, using CT colonography done according to international consensus standards with the aim of detecting cancer or polyps, and reporting post-imaging colorectal cancer rates or sufficient data to allow their calculation. We excluded studies in which all CT colonographies were done because of incomplete colonoscopy or if CT colonography was done with knowledge of colonoscopy findings. We contacted authors of component studies for additional data where necessary for retrospective CT colonography image review and causes for each post-imaging colorectal cancer. Two independent reviewers extracted data from the study reports. Our primary outcome was prevalence of post-imaging colorectal cancer 36 months after CT colonography. We used random-effects meta-analysis to estimate pooled post-imaging colorectal cancer rates, expressed using the total number of cancers and total number of CT colonographies as denominators, and per 1000 person-years. This study is registered with PROSPERO, number CRD42016042437. Findings: 2977 articles were screened and 12 studies were eligible for analysis. These studies reported data for 19 867 patients (aged 18–96 years; of 11 590 with sex data available, 6532 [56%] were female) between March, 2002, and May, 2015. At a mean of 34 months' follow-up (range 3–128·4 months), CT colonography detected 643 colorectal cancers. 
29 post-imaging colorectal cancers were subsequently diagnosed. The pooled post-imaging colorectal cancer rate was 4·42 (95% CI 3·03–6·42) per 100 cancers detected, corresponding to 1·61 (1·11–2·33) post-imaging colorectal cancers per 1000 CT colonographies or 0·64 (0·44–0·92) post-imaging colorectal cancers per 1000 person-years. Heterogeneity was low (I2=0%). 17 (61%) of 28 post-imaging colorectal cancers were attributable to perceptual error and were visible in retrospect. Interpretation: CT colonography does not lead to an excess of post-test cancers relative to colonoscopy within 3–5 years, and the low 5-year post-imaging colorectal cancer rate confirms that the recommended screening interval of 5 years is safe. Since most post-imaging colorectal cancers arise from perceptual errors, radiologist training and quality assurance could help to reduce post-imaging colorectal cancer rates. Funding: St Mark's Hospital Foundation and the UK National Institute for Health Research via the UCL/UCLH Biomedical Research Centre.
Screening for Hypertension in the INpatient Environment (SHINE): A protocol for a prospective study of diagnostic accuracy among adult hospital patients
Introduction A significant percentage of patients admitted to hospital have undiagnosed hypertension. However, present hypertension guidelines in the UK, Europe and USA do not define a blood pressure threshold at which hospital inpatients should be considered at risk of hypertension, outside of the emergency setting. The objective of this study is to identify the optimal in-hospital mean blood pressure threshold, above which patients should receive postdischarge blood pressure assessment in the community. Methods and analysis Screening for Hypertension in the INpatient Environment is a prospective diagnostic accuracy study. Patients admitted to hospital whose mean daytime blood pressure after 24 hours or longer meets the study eligibility threshold (≥120/70 mm Hg) and who have no prior diagnosis of, or medication for, hypertension will be eligible. At 8 weeks postdischarge, recruited participants will wear an ambulatory blood pressure monitor for 24 hours. Mean daytime ambulatory blood pressure will be calculated to assess for the presence or absence of hypertension. Diagnostic performance of in-hospital blood pressure will be assessed by constructing receiver operating characteristic curves from participants' in-hospital mean systolic and mean diastolic blood pressure (index test) versus diagnosis of hypertension determined by mean daytime ambulatory blood pressure (reference test). Ethics and dissemination Ethical approval has been provided by the National Health Service Health Research Authority South Central-Oxford B Research Ethics Committee (19/SC/0026). Findings will be disseminated through national and international conferences, peer-reviewed journals and social media.
Development of practical recommendations for diagnostic accuracy studies in low-prevalence situations
Objective: Low disease prevalence poses challenges for diagnostic accuracy studies because of the large sample sizes that are required to obtain sufficient precision. The aim is to collate and discuss designs of diagnostic accuracy studies suited for use in low-prevalence situations. Study Design and Setting: We conducted a literature search including backward citation tracking and expert consultation. Two reviewers independently selected studies on designs for estimating diagnostic accuracy in a low-prevalence situation. During a 1-day expert meeting, all designs were discussed and recommendations were formulated. Results: We identified six designs for diagnostic accuracy studies that are suitable in low-prevalence situations because they reduced the total sample size or the number of patients undergoing the index test or reference standard depending on which poses the highest burden. We described the advantages and limitations of these designs and evaluated efficiencies in sample sizes, risk of bias, and alignment with the clinical pathway for applicability in routine care. Conclusion: Choosing a study design for diagnostic accuracy studies in low-prevalence situations should depend on whether the aim is to limit the number of patients undergoing the index test or reference standard, and the risk of bias associated with a particular design type.
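The sample-size pressure that motivates these designs can be illustrated with a rough calculation: only diseased participants inform the sensitivity estimate, so at low prevalence the total sample must be many times larger than the number of cases needed. This sketch uses a simple Wald-interval approximation (a simplification; an actual study would use more refined methods):

```python
import math

def total_sample_size(sensitivity: float, ci_half_width: float,
                      prevalence: float, z: float = 1.96):
    """Approximate number of diseased cases, and total participants, needed so
    that the sensitivity estimate has a 95% CI of +/- ci_half_width
    (Wald approximation; only diseased cases contribute to sensitivity)."""
    cases = math.ceil(z ** 2 * sensitivity * (1 - sensitivity) / ci_half_width ** 2)
    total = math.ceil(cases / prevalence)
    return cases, total

# Example: estimating 90% sensitivity to within +/- 5 percentage points
# at 1% prevalence requires ~139 cases, hence ~13 900 participants.
cases, total = total_sample_size(0.90, 0.05, 0.01)
```

Halving the prevalence doubles the required total, which is why the designs discussed in the review focus on reducing the number of patients who must undergo the index test or reference standard.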
Prediction of violent reoffending in prisoners and individuals on probation: a Dutch validation study (OxRec)
Scalable and transparent methods for risk assessment are increasingly required in criminal justice to inform decisions about sentencing, release, parole, and probation. However, few such approaches exist and their validation in external settings is typically lacking. A total national sample of all offenders (9072 individuals released from prison and 6329 individuals on probation) from 2011–2012 in the Netherlands was followed up for violent and any reoffending over 2 years. The sample was mostly male (n = 574 [6%] of prisoners and n = 784 [12%] of probationers were female), and median ages were 30 in the prison sample and 34 in those on probation. Predictors for a scalable risk assessment tool (OxRec) were extracted from a routinely collected dataset used by criminal justice agencies, and outcomes from official criminal registers. OxRec's predictive performance in terms of discrimination and calibration was tested. Reoffending rates in the Dutch prisoner cohort were 16% for 2-year violent reoffending and 44% for 2-year any reoffending, with lower rates in the probation sample. Discrimination as measured by the c-index was moderate, at 0.68 (95% CI: 0.66–0.70) for 2-year violent reoffending in prisoners and between 0.65 and 0.68 for other outcomes and the probation sample. The model required recalibration, after which calibration performance was adequate (e.g. calibration in the large was 1.0 for all scenarios). A recalibrated model for OxRec can be used in the Netherlands to stratify the risk of future violent and any reoffending among individuals released from prison and individuals on probation. The approach that we outline can be considered for external validations of criminal justice and clinical risk models.
Diagnostic accuracy of molecular methods for detecting markers of antimalarial drug resistance in clinical samples of Plasmodium falciparum: Protocol for an update to a systematic review and meta-analysis
Background: Each year, infection with Plasmodium causes millions of clinical cases of malaria and hundreds of thousands of deaths. Resistance to different antimalarial medications continues to develop and spread, threatening effective prophylaxis and treatment. Surveillance of resistance is required to inform health policy and preserve effective antimalarial drugs; molecular methods can be used for surveillance of likely parasite resistance. However, there is no consensus on the most accurate molecular methods, and large variation exists in practice. The objective of this update to the systematic review is to update and improve estimates of the sensitivity and specificity of each molecular method for detecting selected antimalarial drug resistance markers. Methods: We will include diagnostic accuracy studies that compare at least two of any molecular methods to examine blood samples from patients diagnosed with, or suspected of having, malaria, to detect at least one selected marker of antimalarial drug resistance. We will search PubMed, EMBASE, BIOSIS, and Web of Science from 2000 to the present. Two reviewers will independently screen all results, extract data, consider applicability, and evaluate the methodological quality of included studies using QUADAS-2. We will carry out a meta-analysis and use statistical methods to compare results from homogeneous studies. We will use narrative synthesis to compare results of heterogeneous studies. Discussion: This review will help to identify sub-optimal molecular methods for antimalarial marker detection, which may be discontinued, and identify more sensitive and specific methods, which may be adopted. More sensitive and specific detection of drug resistance can be used to improve the breadth and accuracy of surveillance.
This would enable the identification of previously undiscovered areas of antimalarial resistances and susceptibilities, improve the precision of estimates of the prevalence of resistances, and improve our ability to detect smaller changes in these patterns. Higher-quality evidence generated by more accurate and detailed surveillance can be used to inform guidelines on the use of antimalarial drugs, leading to better outcomes for more patients. Systematic review registration: This systematic review protocol was registered with PROSPERO on 22 November 2017 (registration number CRD42017082101).
The Impact of Point-of-Care Blood C-Reactive Protein Testing on Prescribing Antibiotics in Out-of-Hours Primary Care: A Mixed Methods Evaluation
Improving the appropriateness of antibiotic prescribing for respiratory infections in primary care is an antimicrobial stewardship priority. There is limited evidence to support interventions to reduce antibiotic prescribing in out-of-hours (OOH) primary care. Herein, we report a service innovation in which point-of-care C-Reactive Protein (CRP) machines were introduced to three out-of-hours primary care clinical bases in England from August 2018 to December 2019 and compared with four control bases that did not have point-of-care CRP testing. We undertook a mixed-methods evaluation, including a comparative interrupted time series analysis comparing monthly antibiotic prescription rates between bases with CRP machines and those without, an analysis of the number of and reasons for the tests performed, and qualitative interviews with clinicians. Antibiotic prescription rates declined during follow-up, but with no clear difference between the two groups of out-of-hours practices. A single base contributed 217 of the 248 CRP tests performed. Clinicians reported that the tests supported decision making and communication about not prescribing antibiotics, with 'objective' numbers helping them navigate non-prescribing decisions; they also highlighted the challenges of training a fluctuating staff group and practical concerns about using the CRP machine. Service improvements to reduce antibiotic prescribing in out-of-hours primary care need to be developed with an understanding of the needs and context of this service.
Behavioural programmes for cigarette smoking cessation: investigating interactions between behavioural, motivational and delivery components in a systematic review and component network meta-analysis
Aims: To investigate the comparative and combined effectiveness of four types of components of behavioural interventions for cigarette smoking cessation: behavioural (e.g. counselling), motivational (e.g. focus on reasons to quit), delivery mode (e.g. phone) and provider (e.g. nurse). Design: Systematic review and component network meta-analysis of randomised controlled trials identified from Cochrane reviews. Interventions included behavioural interventions for smoking cessation (including all non-pharmacological interventions, e.g. counselling, exercise, hypnotherapy, self-help materials), compared with another behavioural intervention or no support. Building on a 2021 review (CD013229), we conducted three analyses, investigating: the comparative effectiveness of the components; whether models that allowed interactions between components gave different results from models assuming additivity; and predicted effect estimates for combined effects of components that had shown promise but where there were few trials. Setting: Community and health-care settings. Participants: Adults who smoke tobacco. Measurements: Smoking cessation at ≥6 months, preferring sustained, biochemically validated outcomes where available. Findings: Three hundred and twelve trials (250 563 participants) were included. Fifty were at high risk of bias according to the Cochrane risk of bias tool, version 1 (RoB 1); excluding these studies did not change the findings. Head-to-head comparisons of components suggested that support via text message (SMS) was more effective than telephone support (OR 1.48, 95% CrI 1.13–1.94) or print materials (OR 1.44, 95% CrI 1.14–1.83), and individual delivery was less effective than delivery as part of a group (OR 0.78, 95% CrI 0.64–0.95). There was no conclusive evidence of synergistic or antagonistic interactions when combining components that were commonly used together. 
Adding multiple components that are commonly used in behavioural counselling suggested clinically relevant and statistically conclusive evidence of benefit. Components with the largest effects that could be combined, but rarely have been, were estimated to increase the odds of quitting between two and threefold. For example, financial incentives delivered via SMS, with tailoring and a focus on how to quit, had an estimated OR of 2.94 (95% CrI 1.91–4.52). Conclusions: Among the components of behavioural support for smoking cessation, behavioural counselling and guaranteed financial incentives are associated with the greatest success. Incorporating additional components associated with effectiveness may further increase benefit, with delivery via text message showing particular promise.
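The additivity assumption underlying component network meta-analysis can be illustrated numerically: component effects are assumed additive on the log-odds scale, so combined odds ratios multiply. The component ORs below are hypothetical round numbers for illustration, not estimates from this review.

```python
import math

# Hypothetical component odds ratios (illustrative only).
component_or = {
    "financial_incentive": 2.0,
    "sms_delivery": 1.4,
    "tailoring": 1.05,
}

def combined_or(components, table=component_or):
    """Combine component ORs under additivity: exp(sum of log-ORs)."""
    return math.exp(sum(math.log(table[c]) for c in components))

print(round(combined_or(["financial_incentive", "sms_delivery"]), 2))  # 2.8
```

Interaction terms in the review's CNMA models test exactly this assumption: a synergistic pair would have a combined OR larger than the product, an antagonistic pair smaller.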
Cancer statistics: A survival guide
Jason Oke and Tom Fanshawe expose four simple biases that can change our understanding of cancer survival rates and skew comparisons made between countries.
Electronic cigarettes for smoking cessation
Background: Electronic cigarettes (ECs) are handheld electronic vaping devices which produce an aerosol formed by heating an e-liquid. People who smoke report using ECs to stop or reduce smoking, but some organisations, advocacy groups and policymakers have discouraged this, citing lack of evidence of efficacy and safety. People who smoke, healthcare providers and regulators want to know if ECs can help people quit and if they are safe to use for this purpose. This review is an update of a review first published in 2014. Objectives: To evaluate the effectiveness and safety of using electronic cigarettes (ECs) to help people who smoke achieve long-term smoking abstinence. Search methods: We searched the Cochrane Tobacco Addiction Group's Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and PsycINFO for relevant records to January 2020, together with reference-checking and contact with study authors. Selection criteria: We included randomized controlled trials (RCTs) and randomized cross-over trials in which people who smoke were randomized to an EC or control condition. We also included uncontrolled intervention studies in which all participants received an EC intervention. To be included, studies had to report abstinence from cigarettes at six months or longer and/or data on adverse events (AEs) or other markers of safety at one week or longer. Data collection and analysis: We followed standard Cochrane methods for screening and data extraction. Our primary outcome measures were abstinence from smoking after at least six months' follow-up, AEs, and serious adverse events (SAEs). Secondary outcomes included changes in carbon monoxide, blood pressure, heart rate, blood oxygen saturation, lung function, and levels of known carcinogens/toxicants. We used a fixed-effect Mantel-Haenszel model to calculate the risk ratio (RR) with a 95% confidence interval (CI) for dichotomous outcomes. 
For continuous outcomes, we calculated mean differences. Where appropriate, we pooled data from these studies in meta-analyses. Main results: We included 50 completed studies, representing 12,430 participants, of which 26 were RCTs. Thirty-five of the 50 included studies were new to this review update. Of the included studies, we rated four (all of which contribute to our main comparisons) at low risk of bias overall, 37 at high risk overall (including the 24 non-randomized studies), and the remainder at unclear risk. There was moderate-certainty evidence, limited by imprecision, that quit rates were higher in people randomized to nicotine EC than in those randomized to nicotine replacement therapy (NRT) (risk ratio (RR) 1.69, 95% confidence interval (CI) 1.25 to 2.27; I2 = 0%; 3 studies, 1498 participants). In absolute terms, this might translate to an additional four successful quitters per 100 (95% CI 2 to 8). There was low-certainty evidence (limited by very serious imprecision) of no difference in the rate of adverse events (AEs) (RR 0.98, 95% CI 0.80 to 1.19; I2 = 0%; 2 studies, 485 participants). SAEs occurred rarely, with no evidence that their frequency differed between nicotine EC and NRT, but very serious imprecision led to low certainty in this finding (RR 1.37, 95% CI 0.77 to 2.41; I2 = n/a; 2 studies, 727 participants). There was moderate-certainty evidence, again limited by imprecision, that quit rates were higher in people randomized to nicotine EC than to non-nicotine EC (RR 1.71, 95% CI 1.00 to 2.92; I2 = 0%; 3 studies, 802 participants). In absolute terms, this might again lead to an additional four successful quitters per 100 (95% CI 0 to 12). These trials used EC with relatively low nicotine delivery. There was low-certainty evidence, limited by very serious imprecision, that there was no difference in the rate of AEs between these groups (RR 1.00, 95% CI 0.73 to 1.36; I2 = 0%; 2 studies, 346 participants). 
There was insufficient evidence to determine whether rates of SAEs differed between groups, due to very serious imprecision (RR 0.25, 95% CI 0.03 to 2.19; I2 = n/a; 4 studies, 494 participants). Compared to behavioural support only/no support, quit rates were higher for participants randomized to nicotine EC (RR 2.50, 95% CI 1.24 to 5.04; I2 = 0%; 4 studies, 2312 participants). In absolute terms this represents an increase of six per 100 (95% CI 1 to 14). However, this finding was very low-certainty, due to issues with imprecision and risk of bias. There was no evidence that the rate of SAEs varied, but some evidence that non-serious AEs were more common in people randomized to nicotine EC (AEs: RR 1.17, 95% CI 1.04 to 1.31; I2 = 28%; 3 studies, 516 participants; SAEs: RR 1.33, 95% CI 0.25 to 6.96; I2 = 17%; 5 studies, 842 participants). Data from non-randomized studies were consistent with RCT data. The most commonly reported AEs were throat/mouth irritation, headache, cough, and nausea, which tended to dissipate over time with continued use. Very few studies reported data on other outcomes or comparisons and hence evidence for these is limited, with confidence intervals often encompassing clinically significant harm and benefit. Authors' conclusions: There is moderate-certainty evidence that ECs with nicotine increase quit rates compared to ECs without nicotine and compared to NRT. Evidence comparing nicotine EC with usual care/no treatment also suggests benefit, but is less certain. More studies are needed to confirm the degree of effect, particularly when using modern EC products. Confidence intervals were wide for data on AEs, SAEs and other safety markers. Overall incidence of SAEs was low across all study arms. We did not detect any clear evidence of harm from nicotine EC, but longest follow-up was two years and the overall number of studies was small. 
The main limitation of the evidence base remains imprecision due to the small number of RCTs, often with low event rates. Further RCTs are underway. To ensure the review continues to provide up-to-date information for decision-makers, this review is now a living systematic review. We will run searches monthly from December 2020, with the review updated as relevant new evidence becomes available. Please refer to the Cochrane Database of Systematic Reviews for the review's current status.
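The review's absolute-terms statements (e.g. "an additional four successful quitters per 100") follow from applying the risk ratio to a control-group event rate. A minimal sketch of that arithmetic, assuming an illustrative baseline quit rate of 6 per 100 (the RR and its CI are from the review; the baseline is an assumption for illustration):

```python
def extra_events_per_100(rr, baseline_risk):
    """Absolute risk difference per 100 people implied by a risk ratio."""
    return 100 * baseline_risk * (rr - 1)

rr, rr_low, rr_high = 1.69, 1.25, 2.27   # nicotine EC vs NRT, from the review
baseline = 0.06                          # assumed control quit rate, per person

print(round(extra_events_per_100(rr, baseline), 1))       # ~4 extra per 100
print(round(extra_events_per_100(rr_low, baseline), 1),
      round(extra_events_per_100(rr_high, baseline), 1))  # compare the review's 2 to 8
```

The same conversion explains why identical RRs can imply different absolute benefits across comparisons: the baseline quit rate in the control arm sets the scale.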
The diagnostic performance of current tumour markers in surveillance for recurrent testicular cancer: A diagnostic test accuracy systematic review
In this diagnostic test accuracy systematic review we summarise the evidence on the diagnostic accuracy of blood α-fetoprotein (AFP), human chorionic gonadotropin (HCG) and lactate dehydrogenase (LDH) in surveillance for testicular cancer recurrence in adults. We searched four electronic databases for studies that reported the diagnostic accuracy of HCG, AFP, and/or LDH in sufficient detail for sensitivity and specificity to be calculated by extracting a 2 × 2 table comparing biomarker positivity with testicular cancer recurrence. Screening, data extraction and QUADAS-2 quality assessment were completed by two independent reviewers. From 2406 studies, nine met our inclusion criteria. Eight reported data at the per-patient level. Sample sizes were small (range 5 to 449 patients) and clinical heterogeneity precluded meta-analysis. In most studies the specificity for recurrence with AFP and HCG was high (90–100%) but sensitivity was often relatively low, suggesting that many recurrences would not be detected by tumour markers alone. The diagnostic performance of LDH appears poorer. Studies were methodologically weak, with probable selection, incorporation and partial verification bias, and many studies were excluded for not reporting on recurrence-free patients. Limitations including small sample sizes, high heterogeneity, and inconsistent and incomplete reporting mean these results must be interpreted with caution. Despite inclusion of biomarkers in international surveillance guidance, there remains a lack of high-quality evidence about their accuracy, optimal thresholds, and the most effective surveillance strategy in relation to contemporary investigative modalities. Higher-quality research using data from modern-day follow-up cohorts is necessary to identify opportunities to reduce unnecessary testing.
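The 2 × 2 extraction described above yields sensitivity and specificity directly: biomarker positivity is cross-tabulated against confirmed recurrence. A minimal sketch with invented counts (not data from any included study):

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table.

    tp: marker-positive patients with recurrence
    fp: marker-positive patients without recurrence
    fn: marker-negative patients with recurrence
    tn: marker-negative patients without recurrence
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Invented example: 12 of 20 recurrences flagged by the marker,
# 2 false positives among 98 recurrence-free patients.
sens, spec = sens_spec(tp=12, fp=2, fn=8, tn=96)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 60%, specificity 98%
```

This invented pattern mirrors the review's finding: specificity near the 90–100% range while sensitivity stays low enough that a substantial share of recurrences would be missed by the markers alone.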