Effects of changing practitioner empathy and patient expectations in healthcare consultations
This is the protocol for a review and there is no abstract. The objectives are as follows: The main aim of this review will be to assess the effects of changing practitioner empathy or patient expectations for all conditions. The main objective is to conduct a systematic review of randomised trials where the intervention involves manipulating either (a) practitioner empathy or (b) patient expectations, or (c) both.
Relationship between microbiology of throat swab and clinical course among primary care patients with acute cough: A prospective cohort study
Background: Acute lower respiratory tract infections (ALRTIs) account for most antibiotics prescribed in primary care despite lack of efficacy, partly due to clinician uncertainty about aetiology and patient concerns about illness course. Nucleic acid amplification tests could assist antibiotic targeting. Methods: In this prospective cohort study, 645 patients presenting to primary care with acute cough and suspected ALRTI provided throat swabs at baseline. These were tested for respiratory pathogens by real-time polymerase chain reaction and classified as having a respiratory virus, bacteria, both or neither. Three hundred fifty-four participants scored symptom severity daily for 1 week in a diary (0 = absent to 4 = severe problem). Results: Organisms were identified in 346/645 (53.6%) participants. There were differences in the prevalence of seven symptoms between the organism groups at baseline. Those with a virus alone, and those with both virus and bacteria, had higher average severity scores for all symptoms combined during the week of follow-up than those in whom no organisms were detected [adjusted mean differences 0.204 (95% confidence interval 0.010 to 0.398) and 0.348 (0.098 to 0.598), respectively]. There were no differences in the duration of symptoms rated as moderate or severe between organism groups. Conclusions: Differences in presenting symptoms and symptom severity can be identified between patients with viruses and bacteria identified on throat swabs. The magnitude of these differences is unlikely to influence management. Most patients had mild symptoms at 7 days regardless of aetiology, which could inform patients about likely symptom duration.
C-reactive protein and neutrophil count laboratory test requests from primary care: What is the demand and would substitution by point-of-care technology be viable?
Aims C-reactive protein (CRP) and neutrophil count (NC) are important diagnostic indicators of inflammation. Point-of-care (POC) technologies for these markers are available but rarely used in community settings in the UK. To inform the potential for POC tests, it is necessary to understand the demand for testing. We aimed to describe the frequency of CRP and NC test requests from primary care to central laboratory services, describe variability between practices and assess the relationship between the tests. Methods We described the number of patients with either or both laboratory tests, and the volume of testing per individual and per practice, in a retrospective cohort of all adults in general practices in Oxfordshire, 2014-2016. Results 372 017 CRP and 776 581 NC tests in 160 883 and 275 093 patients, respectively, were requested from 69 practices. CRP was tested mainly in combination with NC, while the latter was more often tested alone. The median (IQR) of CRP and NC tests/person tested was 1 (1-2) and 2 (1-3), respectively. The median (IQR) tests/practice/week was 36 (22-52) and 72 (50-108), and per 1000 persons registered/practice/week was 4 (3-5) and 8 (7-9), respectively. The median (IQR) CRP and NC concentrations were 2.7 (0.9-7.9) mg/dL and 4.1 (3.1-5.5)×10⁹/L, respectively. Conclusions The high demand for CRP and NC testing in the community, and the range of results falling within the reportable range for current POC technologies highlight the opportunity for laboratory testing to be supplemented by POC testing in general practice.
Use of Decision Modelling in Economic Evaluations of Diagnostic Tests: An Appraisal and Review of Health Technology Assessments in the UK
Diagnostic tests play an important role in the clinical decision-making process by providing information that enables patients to be identified and stratified to the most appropriate treatment and management strategies. Decision analytic modelling facilitates the synthesis of evidence from multiple sources to evaluate the cost effectiveness of diagnostic tests. This study critically reviews the methods used to model the cost effectiveness of diagnostic tests in UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) reports. UK NIHR HTA reports published between 2009 and 2018 were screened to identify those reporting an economic evaluation of a diagnostic test using decision analytic modelling. Existing decision modelling checklists were identified in the literature and a modified checklist tailored to diagnostic economic evaluations was developed, piloted and used to assess the diagnostic models in HTA reports. Of 728 HTA reports published during the study period, 55 met the inclusion criteria. The majority of models performed well with a clearly defined decision problem and analytical perspective (89% of HTAs met the criterion). The model structure usually reflected the care pathway and progression of the health condition. However, there are areas requiring improvement. These are predominantly systematic identification of treatment effects (20% met), poor selection of comparators (50% met) and assumed independence of tests used in sequence (32% took correlation between sequential tests into consideration). The complexity and constraints of performing decision analysis of diagnostic tests on costs and health outcomes makes it particularly challenging and, as a result, quality issues remain. This review provides a comprehensive assessment of modelling in HTA reports, highlights problems and gives recommendations for future diagnostic modelling practice.
Additional behavioural support as an adjunct to pharmacotherapy for smoking cessation
Background Pharmacotherapies for smoking cessation increase the likelihood of achieving abstinence in a quit attempt. It is plausible that providing support, or, if support is offered, offering more intensive support or support including particular components, may increase abstinence further. Objectives To evaluate the effect of adding or increasing the intensity of behavioural support for people using smoking cessation medications, and to assess whether there are different effects depending on the type of pharmacotherapy, or the amount of support in each condition. We also looked at studies which directly compare behavioural interventions matched for contact time, where pharmacotherapy is provided to both groups (e.g. tests of different components or approaches to behavioural support as an adjunct to pharmacotherapy). Search methods We searched the Cochrane Tobacco Addiction Group Specialised Register, clinicaltrials.gov, and the ICTRP in June 2018 for records with any mention of pharmacotherapy, including any type of nicotine replacement therapy (NRT), bupropion, nortriptyline or varenicline, that evaluated the addition of personal support or compared two or more intensities of behavioural support. Selection criteria Randomised or quasi-randomised controlled trials in which all participants received pharmacotherapy for smoking cessation and conditions differed by the amount or type of behavioural support. The intervention condition had to involve person-to-person contact (defined as face-to-face or telephone). The control condition could receive less intensive personal contact, a different type of personal contact, written information, or no behavioural support at all. We excluded trials recruiting only pregnant women and trials which did not set out to assess smoking cessation at six months or longer. Data collection and analysis For this update, screening and data extraction followed standard Cochrane methods.
The main outcome measure was abstinence from smoking after at least six months of follow-up. We used the most rigorous definition of abstinence for each trial, and biochemically validated rates, if available. We calculated the risk ratio (RR) and 95% confidence interval (CI) for each study. Where appropriate, we performed meta-analysis using a random-effects model. Main results Eighty-three studies, 36 of which were new to this update, met the inclusion criteria, representing 29,536 participants. Overall, we judged 16 studies to be at low risk of bias and 21 studies to be at high risk of bias. All other studies were judged to be at unclear risk of bias. Results were not sensitive to the exclusion of studies at high risk of bias. We pooled all studies comparing more versus less support in the main analysis. Findings demonstrated a benefit of behavioural support in addition to pharmacotherapy. When all studies of additional behavioural therapy were pooled, there was evidence of a statistically significant benefit from additional support (RR 1.15, 95% CI 1.08 to 1.22, I² = 8%, 65 studies, n = 23,331) for abstinence at longest follow-up, and this effect was not different when we compared subgroups by type of pharmacotherapy or intensity of contact. This effect was similar in the subgroup of eight studies in which the control group received no behavioural support (RR 1.20, 95% CI 1.02 to 1.43, I² = 20%, n = 4,018). Seventeen studies compared interventions matched for contact time but that differed in terms of the behavioural components or approaches employed. Of the 15 comparisons, all had small numbers of participants and events. Only one detected a statistically significant effect, favouring a health education approach (which the authors described as standard counselling containing information and advice) over a motivational interviewing approach (RR 0.56, 95% CI 0.33 to 0.94, n = 378).
Authors' conclusions There is high-certainty evidence that providing behavioural support in person or via telephone for people using pharmacotherapy to stop smoking increases quit rates. Increasing the amount of behavioural support is likely to increase the chance of success by about 10% to 20%, based on a pooled estimate from 65 trials. Subgroup analysis suggests that the incremental benefit from more support is similar over a range of levels of baseline support. More research is needed to assess the effectiveness of specific components that comprise behavioural support.
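The pooled risk ratios in this review come from inverse-variance weighting under a random-effects model. As a rough sketch of that machinery (the per-study event counts below are invented for illustration, not data from the review), a DerSimonian-Laird random-effects pool of log risk ratios can be computed like this:

```python
import math

# Hypothetical study data: (events, total) in intervention and control arms.
studies = [(45, 200, 30, 200), (60, 250, 48, 250), (25, 150, 20, 150)]

# Log risk ratio and its approximate variance for each study.
yi, vi = [], []
for e1, n1, e0, n0 in studies:
    yi.append(math.log((e1 / n1) / (e0 / n0)))
    vi.append(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)

# DerSimonian-Laird estimate of between-study variance tau^2.
wi = [1 / v for v in vi]
ybar = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
Q = sum(w * (y - ybar) ** 2 for w, y in zip(wi, yi))
C = sum(wi) - sum(w ** 2 for w in wi) / sum(wi)
tau2 = max(0.0, (Q - (len(studies) - 1)) / C)

# Random-effects pooled log RR and 95% CI, back-transformed to the RR scale.
wstar = [1 / (v + tau2) for v in vi]
mu = sum(w * y for w, y in zip(wstar, yi)) / sum(wstar)
se = math.sqrt(1 / sum(wstar))
rr = math.exp(mu)
ci = (math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se))
```

The I² statistic reported above is derived from the same Q: I² = max(0, (Q − df)/Q).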
Increasing navigation speed at endoluminal CT colonography reduces colonic visualization and polyp identification
Purpose: To investigate the effect of increasing navigation speed on visual search and decision making during polyp identification for computed tomography (CT) colonography. Materials and Methods: Institutional review board permission was obtained to use deidentified CT colonography data for this prospective reader study. After obtaining informed consent from the readers, 12 CT colonography fly-through examinations that depicted eight polyps were presented at four different fixed navigation speeds to 23 radiologists. Speeds ranged from 1 cm/sec to 4.5 cm/sec. Gaze position was tracked by using an infrared eye tracker, and readers indicated that they saw a polyp by clicking a mouse. Patterns of searching and decision making by speed were investigated graphically and by multilevel modeling. Results: Readers identified polyps correctly in 56 of 77 (72.7%) viewings at the slowest speed but in only 137 of 225 (60.9%) viewings at the fastest speed (P = .004). They also identified fewer false-positive features at faster speeds (42 of 115 [36.5%] of videos at the slowest speed vs 89 of 345 [25.8%] at the fastest; P = .02). Gaze location was highly concentrated toward the central quarter of the screen area at faster speeds (mean gaze points at slowest speed vs fastest speed, 86% vs 97%, respectively). Conclusion: Faster navigation speed at endoluminal CT colonography led to progressive restriction of visual search patterns. Greater speed also reduced both true-positive and false-positive colorectal polyp identification.
A large-scale assessment of temporal trends in meta-analyses using systematic review reports from the Cochrane Library
Introduction: Previous studies suggest that many systematic reviews contain meta-analyses that display temporal trends, such as the first study's result being more extreme than later studies' or a drift in the pooled estimate. We assessed the extent and characteristics of temporal trends using all Cochrane intervention reports published 2008-2012. Methods: We selected the largest meta-analysis within each report and analysed trends using methods including a Z-test (first versus subsequent estimates); generalised least squares; and cumulative sum charts. Predictors considered include meta-analysis size and review group. Results: Of 1288 meta-analyses containing at least 4 studies, the point estimate from the first study was more extreme and in the same direction as the pooled estimate in 738 (57%), with a statistically significant difference (first versus subsequent) in 165 (13%). Generalised least squares indicated trends in 717 (56%); 18% of fixed effects analyses had at least one violation of cumulative sum limits. For some methods, meta-analysis size was associated with temporal patterns and use of a random effects model, but there was no consistent association with review group. Conclusions: All results suggest that more meta-analyses demonstrate temporal patterns than would be expected by chance. Hence, assuming the standard meta-analysis model without temporal trend is sometimes inappropriate. Factors associated with trends are likely to be context specific.
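One of the methods named above, the Z-test of the first study's estimate against the subsequent studies, can be sketched as follows. The estimates and standard errors are invented for illustration, and fixed-effect pooling of the later studies is one plausible choice rather than the exact implementation used in the study:

```python
import math

# Hypothetical log effect estimates and standard errors, first study first.
estimates = [-0.80, -0.35, -0.30, -0.25, -0.40]
ses = [0.30, 0.20, 0.25, 0.22, 0.18]

first, se_first = estimates[0], ses[0]

# Inverse-variance fixed-effect pool of the subsequent studies.
w = [1 / s ** 2 for s in ses[1:]]
pooled = sum(wi * e for wi, e in zip(w, estimates[1:])) / sum(w)
se_pooled = math.sqrt(1 / sum(w))

# Z statistic for first vs subsequent; two-sided p from the normal CDF.
z = (first - pooled) / math.sqrt(se_first ** 2 + se_pooled ** 2)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Here the first estimate is more extreme than the pooled later ones (the pattern the review found in 57% of meta-analyses), but the difference does not reach significance.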
Positive messages may reduce patient pain: A meta-analysis
Introduction Current treatments for pain have limited benefits and worrying side effects. Some studies suggest that pain is reduced when clinicians deliver positive messages. However, the effects of positive messages are heterogeneous and have not been subject to meta-analysis. We aimed to estimate the efficacy of positive messages for pain reduction. Methods We included randomized trials of the effects of positive messages in a subset of the studies included in a recent systematic review of context factors for treating pain. Several electronic databases were searched. Reference lists of relevant studies were also searched. Two authors independently undertook study selection, data extraction, risk of bias assessment, and analyses. Our primary outcome measures were differences in patient- or observer-reported pain between groups who were given positive messages and those who were not. Results Of the 16 randomized trials (1703 patients) that met the inclusion criteria, 12 trials had sufficient data for meta-analysis. The pooled standardized effect size was −0.31 (95% confidence interval [CI] −0.61 to −0.01, p = 0.04, I2 = 82%). The effect size remained positive but not statistically significant after we excluded studies considered to have a high risk of bias (standard effect size −0.17, 95% CI −0.54 to 0.19, P = 0.36, I2 = 84%). Conclusion Care of patients with chronic or acute pain may be enhanced when clinicians deliver positive messages about possible clinical outcomes. However, we have identified several limitations of the present study that suggest caution when interpreting the results. We recommend further high-quality studies to confirm (or falsify) our result.
Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: A methodological review of health technology assessments
Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. 
Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
The influence of maternally derived antibody and infant age at vaccination on infant vaccine responses: An individual participant meta-analysis
IMPORTANCE The design of infant immunization schedules requires an understanding of the factors that determine the immune response to each vaccine antigen. DATA SOURCES Deidentified individual participant data from GlaxoSmithKline clinical trials were obtained through Clinical Study Data Request. The data were requested on January 2, 2015, and final data were received on April 11, 2016. STUDY SELECTION Immunogenicity trials of licensed or unlicensed vaccines administered to infants were included if antibody concentrations in infants were measured prior to the first dose of vaccine. DATA EXTRACTION AND SYNTHESIS The database was examined; studies that appeared to have appropriate data were reviewed. MAIN OUTCOMES AND MEASURES Antigen-specific antibody concentration measured 1 month after priming vaccine doses, before booster vaccination, and 1 month after booster vaccine doses. RESULTS A total of 7630 infants from 32 studies in 17 countries were included. Mean (SD) age at baseline was 9.0 (2.3) weeks; 3906 (51.2%) were boys. Preexisting maternal antibody inhibited infant antibody responses to priming doses for 20 of 21 antigens. The largest effects were observed for inactivated polio vaccine, where 2-fold higher maternal antibody concentrations resulted in 20% to 28% lower postvaccination antibody concentration (geometric mean ratios [GMRs], type 1: 0.80; 95% CI, 0.78-0.83; type 2: 0.72; 95% CI, 0.69-0.74; type 3: 0.78; 95% CI, 0.75-0.82). For acellular pertussis antigens, 2-fold higher maternal antibody was associated with 11% lower postvaccination antibody for pertussis toxoid (GMR, 0.89; 95% CI, 0.87-0.90) and filamentous hemagglutinin (GMR, 0.89; 95% CI, 0.88-0.90) and 22% lower pertactin antibody (GMR, 0.78; 95% CI, 0.77-0.80). For tetanus and diphtheria, these estimates were 13% (GMR, 0.87; 95% CI, 0.86-0.88) and 24% (GMR, 0.76; 95% CI, 0.74-0.77), respectively.
The influence of maternal antibody was still evident in reduced responses to booster doses of acellular pertussis, inactivated polio, and diphtheria vaccines at 12 to 24 months of age. Children who were older when first immunized had higher antibody responses to priming doses for 18 of 21 antigens, after adjusting for the effect of maternal antibody concentrations. The largest effect was seen for polyribosylribitol phosphate antibody, where responses were 71% higher per month (GMR, 1.71; 95% CI, 1.52-1.92). CONCLUSIONS AND RELEVANCE Maternal antibody concentrations and infant age at first vaccination both influence infant vaccine responses. These effects are seen for almost all vaccines contained in global immunization programs and influence immune response for some vaccines even at the age of 24 months. These data highlight the potential for maternal immunization strategies to influence established infant programs.
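The GMR-per-doubling figures reported above translate directly into the quoted percentage reductions. A small sketch of that arithmetic, using the GMRs from the abstract (the 4-fold extrapolation at the end is an assumption of the log-linear model, not a reported result):

```python
# GMRs per 2-fold rise in maternal antibody, taken from the abstract
# (IPV types 1-3, pertussis toxoid, pertactin, tetanus, diphtheria).
gmr = {"IPV-1": 0.80, "IPV-2": 0.72, "IPV-3": 0.78,
       "PT": 0.89, "PRN": 0.78, "tetanus": 0.87, "diphtheria": 0.76}

# Percent lower post-vaccination antibody per doubling of maternal antibody.
pct_lower = {k: round((1 - v) * 100) for k, v in gmr.items()}

# Under the implied log-linear model, a 4-fold (two-doubling) higher maternal
# titre multiplies the expected response by the GMR squared.
fold4 = {k: v ** 2 for k, v in gmr.items()}
```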
Prediction of violent crime on discharge from secure psychiatric hospitals: A clinical prediction rule (FoVOx)
Background Current approaches to assess violence risk in secure hospitals are resource intensive, limited by accuracy and authorship bias and may have reached a performance ceiling. This study seeks to develop scalable predictive models for violent offending following discharge from secure psychiatric hospitals. Methods We identified all patients discharged from secure hospitals in Sweden between January 1, 1992 and December 31, 2013. Using multiple Cox regression, pre-specified criminal, sociodemographic, and clinical risk factors were included in a model that was tested for discrimination and calibration in the prediction of violent crime at 12 and 24 months post-discharge. Risk cut-offs were pre-specified at 5% (low vs. medium) and 20% (medium vs. high). Results We identified 2248 patients with 2933 discharges into community settings. We developed a 12-item model with good measures of calibration and discrimination (area under the curve = 0.77 at 12 and 24 months). At 24 months post-discharge, using the 5% cut-off, sensitivity was 96% and specificity was 21%. Positive and negative predictive values were 19% and 97%, respectively. Using the 20% cut-off, sensitivity was 55%, specificity 83% and the positive and negative predictive values were 37% and 91%, respectively. The model was used to develop a free online tool (FoVOx). Interpretation We have developed a prediction score in a Swedish cohort of patients discharged from secure hospitals that can assist in clinical decision-making. Scalable predictive models for violence risk are possible in specific patient groups and can free up clinical time for treatment and management. Further evaluation in other countries is needed. Funding Wellcome Trust (202836/Z/16/Z) and the Swedish Research Council. The funding sources had no involvement in writing of the manuscript or decision to submit or in data collection, analysis or interpretation or any aspect pertinent to the study.
Identification of low risk of violent crime in severe mental illness with a clinical prediction tool (Oxford Mental Illness and Violence tool [OxMIV]): a derivation and validation study
Background: Current approaches to stratify patients with psychiatric disorders into groups on the basis of violence risk are limited by inconsistency, variable accuracy, and unscalability. To address the need for a scalable and valid tool to assess violence risk in patients with schizophrenia spectrum or bipolar disorder, we describe the derivation of a score based on routinely collected factors and present findings from external validation. Methods: On the basis of a national cohort of 75 158 Swedish individuals aged 15–65 years with a diagnosis of severe mental illness (schizophrenia spectrum or bipolar disorder) with 574 018 patient episodes between Jan 1, 2001, and Dec 31, 2008, we developed predictive models for violent offending (primary outcome) within 1 year of hospital discharge for inpatients or clinical contact with psychiatric services for outpatients (patient episode) through linkage of population-based registers. We developed a derivation model to determine the relative influence of prespecified criminal history and sociodemographic and clinical risk factors, which are mostly routinely collected, and then tested it in an external validation. We measured discrimination and calibration for prediction of violent offending at 1 year using specified risk cutoffs. Findings: Of the cohort of 75 158 patients with schizophrenia spectrum or bipolar disorder, we assigned 58 771 (78%) to the derivation sample and 16 387 (22%) to the validation sample. In the derivation sample, 830 (1%) individuals committed a violent offence within 12 months of their patient episode. We developed a 16-item model. The strongest predictors of violent offending within 12 months were conviction for previous violent crime (adjusted odds ratio 5·03 [95% CI 4·23–5·98]; p<0·0001), male sex (2·32 [1·91–2·81]; p<0·0001), and age (0·63 per 10 years of age [0·58–0·67]; p<0·0001). 
In external validation, the model showed good measures of discrimination (c-index 0·89 [0·85–0·93]) and calibration. For risk of violent offending at 1 year, with a 5% cutoff, sensitivity was 62% (95% CI 55–68) and specificity was 94% (93–94). The positive predictive value was 11% and the negative predictive value was more than 99%. We used the model to generate a simple web-based risk calculator (Oxford Mental Illness and Violence tool [OxMIV]). Interpretation: We have developed a prediction score in a national cohort of patients with schizophrenia spectrum or bipolar disorder, which can be used as an adjunct to decision making in clinical practice by identifying those who are at low risk of violent offending. The low positive predictive value suggests that further clinical assessment in individuals at high risk of violent offending is required to establish who might benefit from additional risk management. Further validation in other countries is needed. Funding: Wellcome Trust and Swedish Research Council.
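The predictive values quoted for OxMIV follow from Bayes' theorem applied to sensitivity, specificity and the base rate of violent offending. A minimal sketch, assuming an illustrative 1% base rate (the validation sample's actual base rate may differ):

```python
def predictive_values(sens, spec, prev):
    """Convert sensitivity/specificity at a given base rate (prevalence)
    into positive and negative predictive values via Bayes' theorem."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    fn = (1 - sens) * prev
    tn = spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

# OxMIV external-validation operating point at the 5% cutoff; the 1% base
# rate is an assumption for illustration, not a figure from the paper.
ppv, npv = predictive_values(sens=0.62, spec=0.94, prev=0.01)
```

At such a low base rate the PPV stays near 10% while the NPV exceeds 99%, which is why the authors position the tool as a rule-out aid.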
Prevalence and decay of maternal pneumococcal and meningococcal antibodies: A meta-analysis of type-specific decay rates
Background At the time of an infant's initial vaccination at age ∼2 to 3 months, some infants already have maternal antibodies against vaccine antigens and these can suppress the immune response to vaccination. Modelling the effects of maternal antibody and the timing of infant doses on the antibody response to vaccination requires estimates of the rate of maternal antibody decay. Decay rates are not well characterised in the medical literature. We investigated variation in the prevalence of maternal anti-capsular pneumococcal and meningococcal antibodies in infants in 14 countries, and estimated type-specific half-lives. Methods Individual participant serological data were obtained from clinical trials. Half-lives were estimated from antibody concentrations in infants who did not receive meningococcal or pneumococcal vaccines. Results The seroprevalence of maternal pneumococcal antibodies was highest for serotypes 14 and 19F (92% and 80%, respectively) and lowest for serotypes 4 and 1 (30% and 34%, respectively). Half-life estimates ranged from 38.7 days (95% CI 36.6–41.0) for serotype 6B, to 48.3 days (95% CI 46.7–50.2) for serotype 5. The overall half-life was 42.6 days (95% CI 41.5–43.7). Seroprevalence was highest in Mali, Nigeria, India, and the Philippines (all >65%) and lowest in the Czech Republic and Finland (both <45%). In studies of meningococcal vaccines, seroprevalence was 13% for group C (half-life 39.8 days, 95% CI 33.4–49.4) and 43% for group A (half-life 43.1 days, 95% CI 39.8–47.2). Conclusion Substantial proportions of infants in many countries have antibodies to vaccine serotypes of pneumococcus; however, fewer infants have maternally acquired antibodies to groups A and C meningococcus. Passively acquired antibodies to capsular polysaccharides decay with a half-life of approximately 6 weeks. These estimates are useful for modelling the impact of proposed vaccination programmes, and consideration of schedules with a delayed start.
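The half-life estimates above assume first-order exponential decay of passively acquired antibody. A minimal sketch with invented titres (not study data), chosen so the implied half-life lands near the pooled 42.6-day estimate:

```python
import math

def half_life(c0, ct, days):
    """Half-life implied by exponential decay c(t) = c0 * exp(-k*t),
    given a fall from c0 to ct over `days` days."""
    k = math.log(c0 / ct) / days
    return math.log(2) / k

# Illustrative titres: a fall from 1.00 to 0.25 ug/mL over 85 days is
# exactly two halvings, i.e. a 42.5-day half-life.
t_half = half_life(1.00, 0.25, 85)
```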
Comparative efficacy of drugs for treating giardiasis: A systematic update of the literature and network meta-analysis of randomized clinical trials
Background: Giardiasis is the commonest intestinal protozoal infection worldwide. The current first-choice therapy is metronidazole. Recently, other drugs with potentially higher efficacy or with fewer and milder side effects have increased in popularity, but evidence is limited by a scarcity of randomized controlled trials (RCTs) comparing the many treatment options available. Network meta-analysis (NMA) is a useful tool to compare multiple treatments when there is limited or no direct evidence available. Objectives: To compare the efficacy and side effects of all available drugs for the treatment of giardiasis. Methods: We selected all RCTs included in systematic reviews and expert reviews of all treatments for giardiasis published until 2014, extended the systematic literature search until 2016, and identified new studies by scanning reference lists for relevant studies. We then conducted an NMA of all available treatments for giardiasis by comparing parasitological cure (efficacy) and side effects. Results: We identified 60 RCTs from 58 reports (46 from published systematic reviews, 8 from reference lists and 4 from the updated systematic search). Data from 6714 patients, 18 treatments and 42 treatment comparisons were available. Tinidazole was associated with higher parasitological cure than metronidazole [relative risk (RR) 1.23, 95% CI 1.12-1.35] and albendazole (RR 1.35, 95% CI 1.21-1.50). Taking into consideration clinical efficacy, side effects and amount of the evidence, tinidazole was found to be the most effective drug. Conclusions: We provide additional evidence that single-dose tinidazole is the best available treatment for giardiasis in symptomatic and asymptomatic children and adults.
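Network meta-analysis rests on consistency relations between direct and indirect comparisons: on the log scale, the relative effect of any two treatments equals the difference of their log effects against a common comparator. Using the two direct RR estimates quoted in the abstract, the implied third comparison can be recovered (a Bucher-style sketch, ignoring uncertainty):

```python
import math

# Direct estimates from the abstract.
rr_tin_vs_met = 1.23  # tinidazole vs metronidazole
rr_tin_vs_alb = 1.35  # tinidazole vs albendazole

# Under consistency: log RR(met vs alb) = log RR(tin vs alb) - log RR(tin vs met).
rr_met_vs_alb = math.exp(math.log(rr_tin_vs_alb) - math.log(rr_tin_vs_met))
```

The implied metronidazole-vs-albendazole RR is about 1.10, consistent with tinidazole outperforming both.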
Serotype-Specific Correlates of Protection for Pneumococcal Carriage: An Analysis of Immunity in 19 Countries
Background. Pneumococcal conjugate vaccines (PCVs) provide direct protection against disease in those vaccinated, and interrupt transmission through the prevention of nasopharyngeal (NP) carriage. Methods. We analyzed immunogenicity data from 5224 infants who received PCV in prime-boost schedules. We defined any increase in antibody between the 1-month postpriming visit and the booster dose as an indication of NP carriage ("seroincidence"). We calculated antibody concentrations using receiver operating characteristic curves, and used generalized additive models to compute their protective efficacy against seroincidence. To support seroincidence as a marker of carriage, we compared seroincidence in a randomized immunogenicity trial in Nepal with the serotype-specific prevalence of carriage in the same community. Results. In Nepalese infants, seroincidence of carriage closely correlated with serotype-specific carriage prevalence in the community. In the larger data set, antibody concentrations associated with seroincidence were lowest for serotypes 6B and 23F (0.50 μg/mL and 0.63 μg/mL, respectively), and highest for serotypes 19F and 14 (2.54 μg/mL and 2.48 μg/mL, respectively). The protective efficacy of antibody at these levels was 62% and 74% for serotypes 6B and 23F, and 87% and 84% for serotypes 19F and 14. Protective correlates were on average 2.15 times higher in low/lower middle-income countries than in high/upper middle-income countries (geometric mean ratio, 2.15 [95% confidence interval, 1.46-3.17]; P = .0024). Conclusions. Antibody concentrations associated with protection vary between serotypes. Higher antibody concentrations are required for protection in low-income countries. These findings are important for global vaccination policy, to interrupt transmission by protecting against carriage.
Interactive visualisation for interpreting diagnostic test accuracy study results
Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared with decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting and for users of test evaluations to facilitate interpretation and application of the results.
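The relationship the tools visualise follows directly from Bayes' theorem: post-test probability depends on pre-test probability (prevalence) as well as sensitivity and specificity. A minimal sketch; the sensitivity of 0.80 and specificity of 0.90 are illustrative values, not the C reactive protein data from the article:

```python
def post_test_probabilities(prevalence, sensitivity, specificity):
    """Probability of disease after a positive or a negative test result."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    p_pos = tp / (tp + fp)   # P(disease | positive test)
    p_neg = fn / (fn + tn)   # P(disease | negative test)
    return p_pos, p_neg

# Hypothetical test: sensitivity 0.80, specificity 0.90.
for prev in (0.05, 0.20, 0.50):
    pos, neg = post_test_probabilities(prev, 0.80, 0.90)
    print(f"prevalence {prev:.2f}: P(D|+) = {pos:.2f}, P(D|-) = {neg:.3f}")
```

Running the loop shows the point the article makes: the same test yields very different post-test probabilities as prevalence varies, which is why probabilities must be compared against treatment decision thresholds in the population of interest.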
Post-imaging colorectal cancer or interval cancer rates after CT colonography: a systematic review and meta-analysis
Background: CT colonography is highly sensitive for colorectal cancer, but interval or post-imaging colorectal cancer rates (diagnosis of cancer after initial negative CT colonography) are unknown, as are their underlying causes. We did a systematic review and meta-analysis of post-CT colonography and post-imaging colorectal cancer rates and causes to address this gap in understanding. Methods: We systematically searched MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials. We included randomised, cohort, cross-sectional, or case-control studies published between Jan 1, 1994, and Feb 28, 2017, using CT colonography done according to international consensus standards with the aim of detecting cancer or polyps, and reporting post-imaging colorectal cancer rates or sufficient data to allow their calculation. We excluded studies in which all CT colonographies were done because of incomplete colonoscopy or if CT colonography was done with knowledge of colonoscopy findings. We contacted authors of component studies for additional data where necessary for retrospective CT colonography image review and causes for each post-imaging colorectal cancer. Two independent reviewers extracted data from the study reports. Our primary outcome was prevalence of post-imaging colorectal cancer 36 months after CT colonography. We used random-effects meta-analysis to estimate pooled post-imaging colorectal cancer rates, expressed using the total number of cancers and total number of CT colonographies as denominators, and per 1000 person-years. This study is registered with PROSPERO, number CRD42016042437. Findings: 2977 articles were screened and 12 studies were eligible for analysis. These studies reported data for 19 867 patients (aged 18–96 years; of 11 590 with sex data available, 6532 [56%] were female) between March, 2002, and May, 2015. At a mean of 34 months' follow-up (range 3–128·4 months), CT colonography detected 643 colorectal cancers. 
29 post-imaging colorectal cancers were subsequently diagnosed. The pooled post-imaging colorectal cancer rate was 4·42 (95% CI 3·03–6·42) per 100 cancers detected, corresponding to 1·61 (1·11–2·33) post-imaging colorectal cancers per 1000 CT colonographies or 0·64 (0·44–0·92) post-imaging colorectal cancers per 1000 person-years. Heterogeneity was low (I2=0%). 17 (61%) of 28 post-imaging colorectal cancers were attributable to perceptual error and were visible in retrospect. Interpretation: CT colonography does not lead to an excess of post-test cancers relative to colonoscopy within 3–5 years, and the low 5-year post-imaging colorectal cancer rate confirms that the recommended screening interval of 5 years is safe. Since most post-imaging colorectal cancers arise from perceptual errors, radiologist training and quality assurance could help to reduce post-imaging colorectal cancer rates. Funding: St Mark's Hospital Foundation and the UK National Institute for Health Research via the UCL/UCLH Biomedical Research Centre.
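The pooling step described in the Methods can be sketched as a DerSimonian-Laird random-effects meta-analysis of log rates. The per-study rates and variances below are invented for illustration and are not the review's data:

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling (DerSimonian-Laird) of study estimates."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    # Cochran's Q measures between-study heterogeneity
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-study log-rates of post-imaging cancer per 1000 person-years
log_rates = [math.log(r) for r in (0.55, 0.70, 0.62)]
variances = [0.04, 0.06, 0.05]
pooled, se, tau2 = dersimonian_laird(log_rates, variances)
print(f"pooled rate {math.exp(pooled):.2f} per 1000 person-years, tau^2 = {tau2:.3f}")
```

When estimated heterogeneity is zero (as with the toy data here, and consistent with the review's I2 = 0%), the random-effects result collapses to the fixed-effect inverse-variance average.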
Screening for Hypertension in the INpatient Environment (SHINE): A protocol for a prospective study of diagnostic accuracy among adult hospital patients
Introduction A significant percentage of patients admitted to hospital have undiagnosed hypertension. However, present hypertension guidelines in the UK, Europe and USA do not define a blood pressure threshold at which hospital inpatients should be considered at risk of hypertension, outside of the emergency setting. The objective of this study is to identify the optimal in-hospital mean blood pressure threshold, above which patients should receive postdischarge blood pressure assessment in the community. Methods and analysis Screening for Hypertension in the INpatient Environment is a prospective diagnostic accuracy study. Patients admitted to hospital whose mean daytime blood pressure after 24 hours or longer meets the study eligibility threshold (≥120/70 mm Hg), and who have no prior diagnosis of, or medication for, hypertension, will be eligible. At 8 weeks postdischarge, recruited participants will wear an ambulatory blood pressure monitor for 24 hours. Mean daytime ambulatory blood pressure will be calculated to assess for the presence or absence of hypertension. Diagnostic performance of in-hospital blood pressure will be assessed by constructing receiver operating characteristic curves from participants' in-hospital mean systolic and mean diastolic blood pressure (index test) versus diagnosis of hypertension determined by mean daytime ambulatory blood pressure (reference test). Ethics and dissemination Ethical approval has been provided by the National Health Service Health Research Authority South Central-Oxford B Research Ethics Committee (19/SC/0026). Findings will be disseminated through national and international conferences, peer-reviewed journals and social media.
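One common way to read an "optimal threshold" off a receiver operating characteristic curve is Youden's J statistic (sensitivity + specificity - 1). The protocol does not state which criterion SHINE will use, so the sketch below, with invented blood pressure readings and outcomes, is illustrative only:

```python
def youden_threshold(values, labels, thresholds):
    """Pick the threshold maximising Youden's J = sensitivity + specificity - 1.

    values: index-test measurements (e.g. in-hospital mean systolic BP)
    labels: 1 if the reference test (ambulatory BP) diagnosed hypertension
    """
    best = None
    for t in thresholds:
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best

# Toy data: in-hospital systolic BP with hypertension status from ambulatory monitoring
bp = [118, 124, 129, 133, 136, 141, 145, 152, 158, 163]
htn = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
j, t, sens, spec = youden_threshold(bp, htn, thresholds=range(120, 161, 5))
print(f"best threshold {t} mm Hg: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

In practice the choice of threshold also weighs the relative costs of missed hypertension against unnecessary postdischarge assessment, which Youden's J treats as equal.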
Development of practical recommendations for diagnostic accuracy studies in low-prevalence situations
Objective: Low disease prevalence poses challenges for diagnostic accuracy studies because of the large sample sizes that are required to obtain sufficient precision. The aim is to collate and discuss designs of diagnostic accuracy studies suited for use in low-prevalence situations. Study Design and Setting: We conducted a literature search including backward citation tracking and expert consultation. Two reviewers independently selected studies on designs for estimating diagnostic accuracy in a low-prevalence situation. During a 1-day expert meeting, all designs were discussed and recommendations were formulated. Results: We identified six designs for diagnostic accuracy studies that are suitable in low-prevalence situations because they reduced the total sample size or the number of patients undergoing the index test or reference standard depending on which poses the highest burden. We described the advantages and limitations of these designs and evaluated efficiencies in sample sizes, risk of bias, and alignment with the clinical pathway for applicability in routine care. Conclusion: Choosing a study design for diagnostic accuracy studies in low-prevalence situations should depend on whether the aim is to limit the number of patients undergoing the index test or reference standard, and the risk of bias associated with a particular design type.
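The sample-size problem the authors describe can be made concrete: the number of diseased participants needed to estimate sensitivity with a given precision is fixed by the usual normal-approximation formula, so the total number recruited scales inversely with prevalence. A sketch (the target sensitivity of 0.90 and precision of ±0.05 are illustrative choices, not from the study):

```python
import math

def diagnostic_sample_size(sensitivity, precision, prevalence, z=1.96):
    """Total recruits needed to estimate sensitivity to +/- `precision`.

    The required number of diseased participants follows the standard
    proportion formula; dividing by prevalence gives the total to recruit.
    """
    n_diseased = math.ceil(z**2 * sensitivity * (1 - sensitivity) / precision**2)
    total = math.ceil(n_diseased / prevalence)
    return n_diseased, total

# Estimating a sensitivity of 0.90 to within +/- 0.05:
for prev in (0.20, 0.05, 0.01):
    diseased, total = diagnostic_sample_size(0.90, 0.05, prev)
    print(f"prevalence {prev:.2f}: {diseased} diseased, {total} recruits")
```

At 1% prevalence the same precision for sensitivity requires recruiting twenty times as many participants as at 20% prevalence, which is exactly why the designs collated in this paper aim to reduce the number undergoing the index test or reference standard.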
Prediction of violent reoffending in prisoners and individuals on probation: a Dutch validation study (OxRec)
Scalable and transparent methods for risk assessment are increasingly required in criminal justice to inform decisions about sentencing, release, parole, and probation. However, few such approaches exist and their validation in external settings is typically lacking. A total national sample of offenders in the Netherlands (9072 individuals released from prison and 6329 individuals on probation in 2011–2012) was followed up for violent and any reoffending over 2 years. The sample was mostly male (n = 574 [6%] of prisoners and n = 784 [12%] of probationers were female), and median ages were 30 years in the prison sample and 34 years among those on probation. Predictors for a scalable risk assessment tool (OxRec) were extracted from a routinely collected dataset used by criminal justice agencies, and outcomes were taken from official criminal registers. OxRec's predictive performance was tested in terms of discrimination and calibration. Reoffending rates in the Dutch prisoner cohort were 16% for 2-year violent reoffending and 44% for 2-year any reoffending, with lower rates in the probation sample. Discrimination as measured by the c-index was moderate, at 0.68 (95% CI: 0.66–0.70) for 2-year violent reoffending in prisoners and between 0.65 and 0.68 for other outcomes and for the probation sample. The model required recalibration, after which calibration performance was adequate (e.g. calibration in the large was 1.0 for all scenarios). A recalibrated OxRec model can be used in the Netherlands for individuals released from prison and individuals on probation to stratify their risk of future violent and any reoffending. The approach that we outline can be considered for external validations of criminal justice and clinical risk models.
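The two performance measures reported here, discrimination (c-index) and calibration in the large, can be sketched on toy data. The predicted risks and outcomes below are invented, and calibration in the large is expressed as an observed/expected event ratio, which is one common convention (the paper's exact formulation is not stated in the abstract):

```python
def c_index(risks, outcomes):
    """Concordance (c-index) for binary outcomes: the probability that a
    randomly chosen reoffender was assigned a higher risk than a
    randomly chosen non-reoffender (ties count as half)."""
    pairs = concordant = 0.0
    for ri, yi in zip(risks, outcomes):
        for rj, yj in zip(risks, outcomes):
            if yi == 1 and yj == 0:
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs

def calibration_in_the_large(risks, outcomes):
    """Observed / expected event ratio; 1.0 indicates good overall calibration."""
    return sum(outcomes) / sum(risks)

# Toy predicted 2-year reoffending risks and observed outcomes
risks = [0.10, 0.25, 0.30, 0.45, 0.60, 0.80]
outcomes = [0, 0, 1, 0, 1, 1]
print(f"c-index {c_index(risks, outcomes):.2f}, "
      f"O/E {calibration_in_the_large(risks, outcomes):.2f}")
```

An O/E ratio above 1 (as in this toy example) means the model underpredicts events overall, which is the kind of miscalibration that the recalibration step described in the abstract corrects.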