Comparative Risk of Major Congenital Malformations With Antiseizure Medication Combinations vs Valproate Monotherapy in Pregnancy.
BACKGROUND AND OBJECTIVES: Valproate should be avoided in pregnancy, but it is the most effective drug for generalized epilepsies. Alternative treatment may require combinations of other drugs. Our objectives were to describe first trimester use of antiseizure medication (ASM) combinations that are relevant alternatives to valproate and determine whether specific combinations were associated with a lower risk of major congenital malformations (MCM) compared with valproate monotherapy. METHODS: We conducted a population-based cohort study using linked national registers from Denmark, Finland, Iceland, Norway, and Sweden and administrative health care data from the United States and New South Wales, Australia. We described first trimester use of ASM combinations among pregnant people with epilepsy from 2000 to 2020. We compared the risk of MCM after first trimester exposure to ASM combinations vs valproate monotherapy and low-dose valproate plus lamotrigine or levetiracetam vs high-dose valproate (≥1,000 mg/d). We used log-binomial regression with propensity score weights to calculate adjusted risk ratios (aRRs) and 95% CIs for each dataset. Results were pooled using fixed-effects meta-analysis. RESULTS: Among 50,905 pregnancies in people with epilepsy identified from 7.8 million total pregnancies, 788 used lamotrigine and levetiracetam, 291 used lamotrigine and topiramate, 208 used levetiracetam and topiramate, 80 used lamotrigine and zonisamide, and 91 used levetiracetam and zonisamide. After excluding pregnancies with use of other ASMs, known teratogens, or a child diagnosed with MCM of infectious or genetic cause, we compared 587 exposed to lamotrigine-levetiracetam duotherapy and 186 exposed to lamotrigine-topiramate duotherapy with 1959 exposed to valproate monotherapy. Pooled aRRs were 0.41 (95% CI 0.24-0.69) and 1.26 (0.71-2.23), respectively. 
Duotherapy combinations containing low-dose valproate were infrequent, and comparisons with high-dose valproate monotherapy were inconclusive but suggested a lower risk for combination therapy. Other combinations were too rare for comparative safety analyses. DISCUSSION: Lamotrigine-levetiracetam duotherapy in the first trimester was associated with a 60% lower risk of MCM than valproate monotherapy, while lamotrigine-topiramate was not associated with a reduced risk. With regard to MCM, duotherapy with lamotrigine and levetiracetam may be favored over valproate for treating epilepsy in people with childbearing potential, but whether this combination is as effective as valproate remains to be determined. CLASSIFICATION OF EVIDENCE: This study provides Class II evidence that in people with epilepsy treated in the first trimester of pregnancy, the risk of major congenital malformations is lower with lamotrigine-levetiracetam duotherapy than with valproate alone, but similar with lamotrigine-topiramate.
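The pooling step described above (fixed-effects meta-analysis of dataset-specific adjusted risk ratios) reduces to inverse-variance weighting on the log scale. A minimal sketch follows; the per-dataset aRRs and confidence intervals are hypothetical illustrations, not the study's actual dataset-level estimates:

```python
import math

def pooled_fixed_effect(rrs_with_cis):
    """Inverse-variance fixed-effect pooling of risk ratios.

    Each entry is (rr, ci_low, ci_high); the standard error of log(RR)
    is recovered from the width of the 95% CI on the log scale.
    """
    num = den = 0.0
    for rr, lo, hi in rrs_with_cis:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2          # inverse-variance weight
        num += w * log_rr
        den += w
    pooled_log = num / den
    pooled_se = math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical dataset-level aRRs (illustration only)
rr, lo, hi = pooled_fixed_effect([(0.35, 0.18, 0.68), (0.55, 0.25, 1.21)])
```

Pooling on the log scale keeps the risk-ratio CI symmetric in the metric in which the sampling distribution is approximately normal, which is why the CI is back-transformed at the end.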
Diagnostic accuracy of FeverPAIN and Centor criteria for bacterial throat infection in adults with sore throat: a secondary analysis of a randomised controlled trial
Background: Sore throat is a common and self-limiting condition. There remains ambiguity in stratifying patients to immediate, delayed, or no antibiotic prescriptions. The National Institute for Health and Care Excellence (NICE) recommends two clinical prediction rules (CPRs), FeverPAIN and Centor, to guide decision making. Aim: To describe the diagnostic accuracy of CPRs in identifying streptococcal throat infections. Design & setting: Adults presenting to UK primary care with sore throat who did not require immediate antibiotics. Method: As part of the Treatment Options without Antibiotics for Sore Throat (TOAST) trial, 565 participants, aged ≥18 years, were recruited on the day of presentation to general practice. Physicians could opt to give delayed prescriptions. CPR scores were not part of the trial protocol but were calculated post hoc from baseline assessments. Diagnostic accuracy was calculated by comparing scores with throat swab cultures. Results: 81/502 (16.1%) patients had group A, C, or G streptococcus cultured on throat swab. Overall diagnostic accuracy of both CPRs was poor: area under the receiver operating characteristic (ROC) curve 0.62 for Centor and 0.59 for FeverPAIN. Post-test probability of a positive or negative test was 27.3% (95% confidence interval [CI] = 6.0% to 61.0%) and 84.1% (95% CI = 80.6% to 87.2%) for FeverPAIN ≥4, versus 25.7% (95% CI = 16.2% to 37.2%) and 85.5% (95% CI = 81.8% to 88.7%) for Centor ≥3. Higher CPR scores were associated with increased delayed antibiotic prescriptions (χ2 = 8.42, P = 0.004 for FeverPAIN ≥4; χ2 = 32.0, P<0.001 for Centor ≥3). Conclusion: In those who do not require immediate antibiotics in primary care, neither CPR provides a reliable way of diagnosing streptococcal throat infection. However, clinicians were more likely to give delayed prescriptions to those with higher scores.
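Both prediction rules are simple point counts over the published clinical criteria (FeverPAIN: fever in the past 24 h, purulence, rapid attendance within 3 days, severely inflamed tonsils, no cough or coryza; Centor: tonsillar exudate, tender anterior cervical nodes, fever history, absence of cough). A minimal sketch, with function and parameter names of our own choosing:

```python
def feverpain(fever_24h, purulence, attend_within_3d,
              inflamed_tonsils, no_cough_or_coryza):
    """FeverPAIN score (0-5): one point per criterion present."""
    return sum([fever_24h, purulence, attend_within_3d,
                inflamed_tonsils, no_cough_or_coryza])

def centor(tonsillar_exudate, tender_cervical_nodes,
           fever_history, absence_of_cough):
    """Centor score (0-4): one point per criterion present."""
    return sum([tonsillar_exudate, tender_cervical_nodes,
                fever_history, absence_of_cough])

# The abstract's decision thresholds: FeverPAIN >=4, Centor >=3
score = feverpain(True, True, True, False, True)
high_risk = score >= 4
```

The abstract's point is that even at these thresholds the post-test probability of a positive culture was only about 26-27%, so a high score is far from diagnostic.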
The prediction of suicide in severe mental illness: development and validation of a clinical prediction rule (OxMIS)
Assessment of suicide risk in individuals with severe mental illness is currently inconsistent, and based on clinical decision-making with or without tools developed for other purposes. We aimed to develop and validate a predictive model for suicide using data from linked population-based registers in individuals with severe mental illness. A national cohort of 75,158 Swedish individuals aged 15–65 with a diagnosis of severe mental illness (schizophrenia-spectrum disorders and bipolar disorder), with 574,018 clinical patient episodes between 2001 and 2008, was split into development (58,771 patients, 494 suicides) and external validation (16,387 patients, 139 suicides) samples. A multivariable derivation model was developed to determine the strength of pre-specified routinely collected socio-demographic and clinical risk factors, and then tested in external validation. We measured discrimination and calibration for prediction of suicide at 1 year using specified risk cut-offs. A 17-item clinical risk prediction model for suicide was developed and showed moderately good measures of discrimination (c-index 0.71) and calibration. For risk of suicide at 1 year, using a pre-specified 1% cut-off, sensitivity was 55% (95% confidence interval [CI] 47–63%) and specificity was 75% (95% CI 74–75%). Positive and negative predictive values were 2% and 99%, respectively. The model was used to generate a simple, freely available, web-based probability-based risk calculator (Oxford Mental Illness and Suicide tool, or OxMIS) without categorical cut-offs. A scalable prediction score for suicide in individuals with severe mental illness is feasible. If validated in other samples and linked to effective interventions, using a probability score may assist clinical decision-making.
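The reported predictive values follow directly from Bayes' rule applied to the validation sample's sensitivity, specificity, and baseline 1-year suicide rate (139/16,387 ≈ 0.85%). A minimal sketch:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive/negative predictive value via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# OxMIS external validation figures: sens 0.55, spec 0.75,
# ~0.85% 1-year suicide rate (139/16,387)
ppv, npv = predictive_values(0.55, 0.75, 139 / 16387)
# ppv rounds to ~2%, npv to ~99%, matching the abstract
```

This is why PPV collapses for rare outcomes even with reasonable discrimination: at a base rate under 1%, false positives from the 25% of non-cases who screen positive swamp the true positives.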
Evaluation of the diagnostic accuracy of two point-of-care tests for COVID-19 when used in symptomatic patients in community settings in the UK primary care COVID diagnostic accuracy platform trial (RAPTOR-C19)
Background and objective Point-of-care lateral flow device antigen testing has been used extensively to identify individuals with active SARS-CoV-2 infection in the community. This study aimed to evaluate the diagnostic accuracy of two point-of-care tests (POCTs) for SARS-CoV-2 in routine community care. Methods Adults and children with symptoms consistent with suspected current COVID-19 infection were prospectively recruited from 19 UK general practices and two COVID-19 testing centres between October 2020 and October 2021. Participants were tested by trained healthcare workers using at least one of two index POCTs (Roche-branded SD Biosensor Standard™ Q SARS-CoV-2 Rapid Antigen Test and/or BD Veritor™ System for Rapid Detection of SARS-CoV-2). The reference standard was laboratory triplex reverse transcription quantitative PCR (RT-PCR) using a combined nasal/oropharyngeal swab. Diagnostic accuracy parameters were estimated, with 95% confidence intervals (CIs), overall, in relation to RT-PCR cycle threshold and in pre-specified subgroups.
Perceptions on undertaking regular asymptomatic self-testing for COVID-19 using lateral flow tests: A qualitative study of university students and staff
Objectives Successful implementation of asymptomatic testing programmes using lateral flow tests (LFTs) depends on several factors, including feasibility, acceptability and how people act on test results. We aimed to examine experiences of university students and staff of regular asymptomatic self-testing using LFTs, and their subsequent behaviours. Design and setting A qualitative study using semistructured remote interviews and qualitative survey responses, which were analysed thematically. Participants People who were participating in a weekly testing feasibility study at the University of Oxford, between October 2020 and January 2021. Results We interviewed 18 and surveyed 214 participants. Participants were motivated to regularly self-test as they wanted to know whether or not they were infected with SARS-CoV-2. Most reported that a negative test result did not change their behaviour, but it did provide them with reassurance to engage with permitted activities. In contrast, some participants reported making decisions about visiting other people because they felt reassured by a negative test result. Participants valued the training but some still doubted their ability to carry out the test. Participants were concerned about the safety of attending test sites with lots of people and reported that home testing was most convenient. Conclusions Clear messages highlighting the benefits of regular testing for family, friends and society in identifying asymptomatic cases are needed. These should be coupled with transparent communication about the accuracy of LFTs and how to act on either a positive or negative result. Concerns about safety, convenience of testing and ability to do tests need to be addressed to ensure successful scaling up of asymptomatic testing.
Feasibility and Acceptability of Community Coronavirus Disease 2019 Testing Strategies (FACTS) in a University Setting
Background: During the coronavirus disease 2019 (COVID-19) pandemic in 2020, the UK government began a mass severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing program. This study aimed to determine the feasibility and acceptability of organized regular self-testing for SARS-CoV-2. Methods: This was a mixed-methods observational cohort study in asymptomatic students and staff at University of Oxford, who performed SARS-CoV-2 antigen lateral flow self-testing. Data on uptake and adherence, acceptability, and test interpretation were collected via a smartphone app, an online survey, and qualitative interviews. Results: Across 3 main sites, 551 participants (25% of those invited) performed 2728 tests during a follow-up of 5.6 weeks; 447 participants (81%) completed at least 2 tests, and 340 (62%) completed at least 4. The survey, completed by 214 participants (39%), found that 98% of people were confident to self-test and believed self-testing to be beneficial. Acceptability of self-testing was high, with 91% of ratings being acceptable or very acceptable. A total of 2711 (99.4%) test results were negative, 9 were positive, and 8 were inconclusive. Results from 18 qualitative interviews with students and staff revealed that participants valued regular testing, but there were concerns about test accuracy that impacted uptake and adherence. Conclusions: This is the first study to assess feasibility and acceptability of regular SARS-CoV-2 self-testing. It provides evidence to inform recruitment for, adherence to, and acceptability of regular SARS-CoV-2 self-testing programs for asymptomatic individuals using lateral flow tests. We found that self-testing is acceptable and people were able to interpret results accurately.
The predictive performance of criminal risk assessment tools used at sentencing: Systematic review of validation studies
Although risk assessment tools have been widely used to inform sentencing decisions, there is uncertainty about the extent and quality of evidence of their predictive performance when validated in new samples. Following PRISMA guidelines, we conducted a systematic review of validation studies of 11 commonly used risk assessment tools for sentencing. We identified 36 studies with 597,665 participants, among which were 27 independent validation studies with 177,711 individuals. Overall, the predictive performance of the included risk assessment tools was mixed, and ranged from poor to moderate. Tool performance was typically overestimated in studies with smaller sample sizes or studies in which tool developers were co-authors. Most studies only reported area under the curve (AUC), which ranged from 0.57 to 0.75 in independent studies with more than 500 participants. The majority did not report key performance measures, such as calibration and rates of false positives and negatives. In addition, most validation studies had a high risk of bias, partly due to the inappropriate analytical approaches used. We conclude that the research priority is for future investigations to address the key methodological shortcomings identified in this review, and policy makers should enable this research. More sufficiently powered independent validation studies are necessary.
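The AUC that most of these studies report has a direct probabilistic reading: it is the Mann-Whitney probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal sketch with hypothetical scores (an O(n·m) loop; real evaluations would use a rank-based formula or a library routine):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U divided by n_pos * n_neg); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores for reoffenders vs non-reoffenders
a = auc([0.8, 0.6, 0.55, 0.4], [0.5, 0.3, 0.45, 0.6])
```

Note what AUC alone cannot tell you, which is the review's complaint: it says nothing about calibration or about false-positive and false-negative rates at the operational cutoff.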
Incidence, risk factors, and health service burden of sequelae of campylobacter and non-typhoidal salmonella infections in England, 2000–2015: A retrospective cohort study using linked electronic health records
Background: Reactive arthritis, irritable bowel syndrome (IBS), Guillain-Barré syndrome, ulcerative colitis, and Crohn's disease may be sequelae of Campylobacter or non-typhoidal Salmonella (NTS) infections. Proton pump inhibitors (PPI) and antibiotics may increase the risk of gastrointestinal infections (GII); however, their impact on sequelae onset is unclear. We investigated the incidence of sequelae, their association with antibiotic and PPI prescription, and assessed the economic impact on the NHS. Methods: Data from the Clinical Practice Research Datalink for patients consulting their GP for Campylobacter or NTS infection during 2000–2015 were linked to hospital, mortality, and Index of Multiple Deprivation data. We estimated the incidence of sequelae and deaths in the 12 months following GII. We conducted logistic regression modelling for the adjusted association with prescriptions. We compared differences in resource use and costs pre- and post-infection amongst patients with and without sequelae. Findings: Of 20,471 patients with GII (Campylobacter 17,838), less than 2% (347) developed sequelae, with IBS (268) most common. Amongst Campylobacter patients, those prescribed PPI within 12 months before infection had elevated risk of IBS (adjusted odds ratio [aOR] 2.1, 1.5–2.9), as did those prescribed cephalosporins within 7 days before/after infection (aOR 3.6, 1.1–11.7). Campylobacter sequelae led to approximately £1.3 million (£750,000–£1.7 million) in additional annual NHS expenditure. Interpretation: Sequelae of Campylobacter and NTS infections are rare but associated with increased NHS costs. Prior prescription of PPI may be a modifiable risk factor. Incidence of sequelae, healthcare resource use and costs are essential parameters for future burden of disease studies.
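The abstract reports adjusted odds ratios from logistic regression; as a simpler illustration of the underlying quantity, a crude (unadjusted) odds ratio with a Wald confidence interval can be computed directly from a 2×2 exposure-outcome table. The counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # SE of log(OR) is sqrt of summed reciprocal cell counts
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: IBS by prior PPI prescription (illustration only)
or_, lo, hi = odds_ratio_ci(40, 960, 228, 19243)
```

An adjusted OR, as in the study, would instead come from exponentiating the exposure coefficient of a multivariable logistic model, which controls for confounders the crude table ignores.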
Do prevalence expectations affect patterns of visual search and decision-making in interpreting CT colonography endoluminal videos?
Objective: To assess the effect of expected abnormality prevalence on visual search and decision-making in CT colonography (CTC). Methods: 13 radiologists interpreted endoluminal CTC flythroughs of the same group of 10 patient cases, 3 times each. Abnormality prevalence was fixed (50%), but readers were told, before viewing each group, that prevalence was either 20%, 50% or 80% in the population from which cases were drawn. Infrared visual search recording was used. Readers indicated seeing a polyp by clicking a mouse. Multilevel modelling quantified the effect of expected prevalence on outcomes. Results: Differences between expected prevalences were not statistically significant for time to first pursuit of the polyp (median 0.5 s, each prevalence), pursuit rate when no polyp was on screen (median 2.7 s⁻¹, each prevalence) or number of mouse clicks [mean 0.75/video (20% prevalence), 0.93 (50%), 0.97 (80%)]. There was weak evidence of an increased tendency to look outside the central screen area at 80% prevalence and a reduction in positive polyp identifications at 20% prevalence. Conclusion: This study did not find a large effect of prevalence information on most visual search metrics or polyp identification in CTC. Further research is required to quantify effects at lower prevalence and in relation to secondary outcome measures.
Prediction of violent reoffending on release from prison: Derivation and external validation of a scalable tool
Background: More than 30 million people are released from prison worldwide every year, who include a group at high risk of perpetrating interpersonal violence. Because there is considerable inconsistency and inefficiency in identifying those who would benefit from interventions to reduce this risk, we developed and validated a clinical prediction rule to determine the risk of violent offending in released prisoners. Methods: We did a cohort study of a population of released prisoners in Sweden. Through linkage of population-based registers, we developed predictive models for violent reoffending for the cohort. First, we developed a derivation model to determine the strength of prespecified, routinely obtained criminal history, sociodemographic, and clinical risk factors using multivariable Cox proportional hazard regression, and then tested them in an external validation. We measured discrimination and calibration for prediction of our primary outcome of violent reoffending at 1 and 2 years using cutoffs of 10% for 1-year risk and 20% for 2-year risk. Findings: We identified a cohort of 47 326 prisoners released in Sweden between 2001 and 2009, with 11 263 incidents of violent reoffending during this period. We developed a 14-item derivation model to predict violent reoffending and tested it in an external validation (assigning 37 100 individuals to the derivation sample and 10 226 to the validation sample). The model showed good measures of discrimination (Harrell's c-index 0·74) and calibration. For risk of violent reoffending at 1 year, sensitivity was 76% (95% CI 73-79) and specificity was 61% (95% CI 60-62). Positive and negative predictive values were 21% (95% CI 19-22) and 95% (95% CI 94-96), respectively. At 2 years, sensitivity was 67% (95% CI 64-69) and specificity was 70% (95% CI 69-72). Positive and negative predictive values were 37% (95% CI 35-39) and 89% (95% CI 88-90), respectively. 
Of individuals with a predicted risk of violent reoffending of 50% or more, 88% had drug and alcohol use disorders. We used the model to generate a simple, web-based, risk calculator (OxRec) that is free to use. Interpretation: We have developed a prediction model in a Swedish prison population that can assist with decision making on release by identifying those who are at low risk of future violent offending, and those at high risk of violent reoffending who might benefit from drug and alcohol treatment. Further assessments in other populations and countries are needed. Funding: Wellcome Trust, the Swedish Research Council, and the Swedish Research Council for Health, Working Life and Welfare.
Associations between circadian rhythm instability, appraisal style and mood in bipolar disorder
Background Internal appraisal styles, in addition to circadian and social rhythm instability, have been implicated in the development of mood experiences in bipolar disorder (BD), yet potential interactions between these variables remain under researched. Methods This study used online questionnaires to examine relationships between social and circadian rhythm instability, appraisal style and mood within populations at varying vulnerability for BD. Results Participants with BD (n=51), and those at behavioural high-risk (BHR; n=77), exhibited poor sleep quality and a stronger tendency to form internal appraisals of both positive and negative experiences compared to non-clinical controls (n=498) and participants with fibromyalgia (n=80). Participants with BD also exhibited a stronger tendency to adopt an internal, negative appraisal style compared to individuals at BHR. Sleep disturbance and internal appraisal styles were significantly associated with low mood in BD. Limitations Sleep quality and social rhythm stability were assessed using self-report measures only, which may differ from objective measures. Causal relationships between constructs could not be examined due to the cross-sectional design. Conclusions The findings suggest the importance of attending to internal appraisal styles and sleep quality when working therapeutically with individuals diagnosed with BD. Potential differences in the effect of appraisal style at the state and trait level warrant further exploration.
Assessing sex-differences and the effect of timing of vaccination on immunogenicity, reactogenicity and efficacy of vaccines in young children: Study protocol for an individual participant data meta-analysis of randomised controlled trials
Introduction: Disease incidence differs between males and females for some infectious or inflammatory diseases. Sex-differences in immune responses to some vaccines have also been observed, mostly to viral vaccines in adults. Little evidence is available on whether sex-differences occur in response to immunisation in infancy, even though this is the age group in which most vaccines are administered. Factors other than sex, such as timing or coadministration of other vaccines, can also influence the immune response to vaccination. Methods and analysis: An individual participant data meta-analysis of randomised controlled trials of vaccines in healthy infants and young children will be conducted. Fully anonymised data from 170 randomised controlled trials of vaccines for diphtheria, tetanus, Bordetella pertussis, polio, Haemophilus influenzae type B, hepatitis B, Streptococcus pneumoniae, Neisseria meningitidis, measles, mumps, rubella, varicella and rotavirus will be combined for analysis. Outcomes include measures of immunogenicity (immunoglobulins), reactogenicity, safety and disease-specific clinical efficacy. Data from trials of vaccines containing similar components will be combined in hierarchical models and the effect of sex and timing of vaccinations estimated for each outcome separately. Ethics and dissemination: Systematic reviews of published estimates of sex-differences cannot adequately answer questions in this field, since such comparisons are never the main purpose of a clinical trial and thus a large degree of reporting bias exists in the published literature. Recent improvements in the widespread availability of individual participant data from randomised controlled trials make it feasible to conduct extensive individual participant data meta-analyses which were previously impossible, thereby reducing the effect of publication or reporting bias on the understanding of the infant immune response.
A national physician survey of diagnostic error in paediatrics
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4–20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. Conclusion: We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies.
What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10–15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors.
What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
Interventions to reduce harm from continued tobacco use
Background: Although smoking cessation is currently the only guaranteed way to reduce the harm caused by tobacco smoking, a reasonable secondary tobacco control approach may be to try to reduce the harm from continued tobacco use amongst smokers unable or unwilling to quit. Possible approaches to reduce the exposure to toxins from smoking include reducing the amount of tobacco used, and using less toxic products, such as pharmaceutical nicotine and potential reduced-exposure tobacco products (PREPs), as an alternative to cigarettes. Objectives: To assess the effects of interventions intended to reduce the harm to health of continued tobacco use, we considered the following specific questions: do interventions intended to reduce harm have an effect on long-term health status? Do they lead to a reduction in the number of cigarettes smoked? Do they have an effect on smoking abstinence? Do they have an effect on biomarkers of tobacco exposure? And do they have an effect on biomarkers of damage caused by tobacco? Search methods: We searched the Cochrane Tobacco Addiction Group Trials Register (CRS) on 21 October 2015, using free-text and MeSH terms for harm reduction, smoking reduction and cigarette reduction. Selection criteria: Randomized or quasi-randomized controlled trials of interventions to reduce the amount smoked, or to reduce harm from smoking by means other than cessation. We included studies carried out in smokers with no immediate desire to quit all tobacco use. Primary outcomes were change in cigarette consumption, smoking cessation and any markers of damage or benefit to health, measured at least six months from the start of the intervention. Data collection and analysis: We assessed study eligibility for inclusion using standard Cochrane methods. We pooled trials with similar interventions and outcomes (> 50% reduction in cigarettes per day (CPD) and long-term smoking abstinence), using fixed-effect models.
Where it was not possible to meta-analyse data, we summarized findings narratively. Main results: Twenty-four trials evaluated interventions to help those who smoke to cut down the amount smoked or to replace their regular cigarettes with PREPs, compared to placebo, brief intervention, or a comparison intervention. None of these trials directly tested whether harm reduction strategies reduced the harms to health caused by smoking. Most trials (14/24) tested nicotine replacement therapy (NRT) as an intervention to assist reduction. In a pooled analysis of eight trials, NRT significantly increased the likelihood of reducing CPD by at least 50% for people using nicotine gum or inhaler or a choice of product compared to placebo (risk ratio (RR) 1.75, 95% confidence interval (CI) 1.44 to 2.13; 3081 participants). Where average changes from baseline were compared for different measures, carbon monoxide (CO) and cotinine generally showed smaller reductions than CPD. Use of NRT versus placebo also significantly increased the likelihood of ultimately quitting smoking (RR 1.87, 95% CI 1.43 to 2.44; 8 trials, 3081 participants; quality of the evidence: low). Two trials comparing NRT and behavioural support to brief advice found a significant effect on reduction, but no significant effect on cessation. We found one trial investigating each of the following harm reduction intervention aids: bupropion, varenicline, electronic cigarettes and snus, plus one further trial of nicotine patches to facilitate temporary abstinence. The evidence for all five intervention types was therefore imprecise, and it is unclear whether or not these aids increase the likelihood of smoking reduction or cessation. Two trials investigating two different types of behavioural advice and instructions on reducing CPD also provided imprecise evidence. Therefore, the evidence base for this comparison is inadequate to support the use of these types of behavioural advice to reduce smoking.
Four studies of PREPs (cigarettes with reduced levels of tar, carbon and nicotine, and in one case delivered using an electronically-heated cigarette smoking system) showed some reduction in exposure to some toxicants, but it is unclear whether this would substantially alter the risk of harm. We judged the included studies to be generally at a low or unclear risk of bias; however, there were some ratings of high risk, due to a lack of blinding and the potential for detection bias. Using the GRADE system, we rated the overall quality of the evidence for our cessation outcomes as 'low' or 'very low', due to imprecision and indirectness. A 'low' grade means that further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate. A 'very low' grade means we are very uncertain about the estimate. Authors' conclusions: People who do not wish to quit can be helped to cut down the number of cigarettes they smoke and to quit smoking in the long term, using NRT, despite original intentions not to do so. However, we rated the evidence contributing to the cessation outcome for NRT as 'low' by GRADE standards. There is a lack of evidence to support the use of other harm reduction aids to reduce the harm caused by continued tobacco smoking. This could simply be due to the lack of high-quality studies (our confidence in cessation outcomes for these aids is rated 'low' or 'very low' due to imprecision by GRADE standards), meaning that we may have missed a worthwhile effect, or due to a lack of effect on reduction or quit rates. It is therefore important that more high-quality RCTs are conducted, and that these also measure the long-term health effects of treatments.
Fluid responsiveness prediction using Vigileo FloTrac measured cardiac output changes during passive leg raise test
Background: Passive leg raising (PLR) is a so-called self-volume challenge used to test for fluid responsiveness. Changes in cardiac output (CO) or stroke volume (SV) measured during PLR are used to predict the need for subsequent fluid loading. This requires a device that can measure CO changes rapidly. The Vigileo™ monitor, using third-generation software, allows continuous CO monitoring. The aim of this study was to evaluate whether changes in CO (measured with the Vigileo device) during a PLR manoeuvre accurately predict fluid responsiveness. Methods: This was a prospective study in a 20-bedded mixed general critical care unit in a large non-university regional referral hospital. Fluid responders were defined as having an increase in CO of greater than 15 % following a fluid challenge. Patients meeting the criteria for circulatory shock with a Vigileo™ monitor (Vigileo™; FloTrac; Edwards™ Lifesciences, Irvine, CA, USA) already in situ, and assessed as requiring volume expansion by the clinical team based on clinical criteria, were included. All patients underwent a PLR manoeuvre followed by a fluid challenge. Results: Data were collected and analysed on stroke volume variation (SVV) at baseline, and on CO and SVV changes during the PLR manoeuvre and following a subsequent fluid challenge, in 33 patients. The majority had septic shock. Patient characteristics, baseline haemodynamic variables and baseline vasoactive infusion requirements were similar between fluid responders (10 patients) and non-responders (23 patients). Peak increase in CO occurred within 120 s during the PLR in all cases. Using an optimal cut point of a 9 % increase in CO during the PLR produced an area under the receiver operating characteristic curve of 0.85 (95 % CI 0.63 to 1.00), with a sensitivity of 80 % (95 % CI 44 to 96 %) and a specificity of 91 % (95 % CI 70 to 98 %).
Conclusions: CO changes measured by the Vigileo™ monitor using third-generation software during a PLR test predict fluid responsiveness in mixed medical and surgical patients with vasopressor-dependent circulatory shock.
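The cut-point analysis above (PLR-induced CO increase ≥ 9% vs the fluid-challenge reference standard) amounts to tabulating true/false positives and negatives at a threshold. A minimal sketch with hypothetical per-patient data, not the study's measurements:

```python
def sens_spec_at_cutoff(delta_co, responder, cutoff=0.09):
    """Sensitivity/specificity of 'PLR raises CO by >= cutoff'
    against the fluid-challenge reference standard."""
    tp = fp = tn = fn = 0
    for d, r in zip(delta_co, responder):
        test_pos = d >= cutoff
        if test_pos and r:
            tp += 1
        elif test_pos and not r:
            fp += 1
        elif not test_pos and not r:
            tn += 1
        else:
            fn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical fractional CO changes during PLR and reference outcomes
deltas = [0.15, 0.12, 0.10, 0.05, 0.08, 0.02, 0.11, 0.01]
responders = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec_at_cutoff(deltas, responders)
```

Sweeping `cutoff` over the observed range and plotting sensitivity against 1 − specificity at each value yields the ROC curve whose area the abstract reports as 0.85.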
Factors Associated with Sequelae of Campylobacter and Non-typhoidal Salmonella Infections: A Systematic Review
Despite the significant global burden of gastroenteritis and resulting sequelae, there is limited evidence on risk factors for sequelae development. We updated and extended previous systematic reviews by assessing the role of antibiotics, proton pump inhibitors (PPI) and symptom severity in the development of sequelae following campylobacteriosis and salmonellosis. We searched four databases, including PubMed, from 1 January 2011 to 29 April 2016. Observational studies reporting sequelae of reactive arthritis (ReA), Reiter's syndrome (RS), irritable bowel syndrome (IBS) and Guillain-Barré syndrome (GBS) following gastroenteritis were included. The primary outcome was incidence of sequelae of interest amongst cases of campylobacteriosis and salmonellosis. A narrative synthesis was conducted where heterogeneity was high. Of the 55 articles included, incidence of ReA (n = 37), RS (n = 5), IBS (n = 12) and GBS (n = 9) were reported following campylobacteriosis and salmonellosis. A pooled summary for each sequela was not estimated due to high level of heterogeneity across studies (I2 > 90%). PPI usage and symptoms were sparsely reported. Three out of seven studies found a statistically significant association between antibiotics usage and development of ReA. Additional primary studies investigating risk modifying factors in sequelae of GI infections are required to enable targeted interventions.