Background: Reactive arthritis, irritable bowel syndrome (IBS), Guillain-Barré syndrome, ulcerative colitis, and Crohn's disease may be sequelae of Campylobacter or non-typhoidal Salmonella (NTS) infections. Proton pump inhibitors (PPI) and antibiotics may increase the risk of gastrointestinal infections (GII); however, their impact on sequelae onset is unclear. We investigated the incidence of sequelae, their association with antibiotic and PPI prescription, and assessed the economic impact on the NHS. Methods: Data from the Clinical Practice Research Datalink for patients consulting their GP for Campylobacter or NTS infection during 2000–2015 were linked to hospital, mortality, and Index of Multiple Deprivation data. We estimated the incidence of sequelae and deaths in the 12 months following GII. We conducted logistic regression modelling for the adjusted association with prescriptions. We compared differences in resource use and costs pre- and post-infection amongst patients with and without sequelae. Findings: Of 20,471 patients with GII (Campylobacter 17,838), less than 2% (347) developed sequelae, with IBS (268) the most common. Amongst Campylobacter patients, those with prescriptions for PPI within 12 months before infection, and cephalosporins within 7 days before/after infection, had elevated risk of IBS (adjusted odds ratio [aOR] 2.1, 1.5–2.9 and aOR 3.6, 1.1–11.7, respectively). Campylobacter sequelae led to approximately £1.3 million (£750,000 to £1.7 million) in additional annual NHS expenditure. Interpretation: Sequelae of Campylobacter and NTS infections are rare but associated with increased NHS costs. Prior prescription of PPI may be a modifiable risk factor. Incidence of sequelae, healthcare resource use and costs are essential parameters for future burden of disease studies.
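The adjusted odds ratios above come from logistic regression modelling; as a simple illustration of the underlying quantity, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a hypothetical 2×2 table. The counts are invented for illustration only, not taken from the CPRD data described here:

```python
import math

def odds_ratio_ci(exp_cases, exp_noncases, unexp_cases, unexp_noncases, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table
    of exposure (e.g. prior PPI prescription) vs outcome (e.g. IBS)."""
    or_ = (exp_cases * unexp_noncases) / (exp_noncases * unexp_cases)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se = math.sqrt(1/exp_cases + 1/exp_noncases + 1/unexp_cases + 1/unexp_noncases)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: IBS cases / non-cases by prior PPI prescription
print(odds_ratio_ci(40, 960, 20, 980))
```

A full adjusted analysis would instead fit a multivariable logistic regression; this sketch only shows how a single crude odds ratio and its interval are formed.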
Objective: To assess the effect of expected abnormality prevalence on visual search and decision-making in CT colonography (CTC). Methods: 13 radiologists interpreted endoluminal CTC flythroughs of the same group of 10 patient cases, 3 times each. Abnormality prevalence was fixed (50%), but readers were told, before viewing each group, that prevalence was either 20%, 50% or 80% in the population from which cases were drawn. Infrared visual search recording was used. Readers indicated seeing a polyp by clicking a mouse. Multilevel modelling quantified the effect of expected prevalence on outcomes. Results: Differences between expected prevalences were not statistically significant for time to first pursuit of the polyp (median 0.5 s, each prevalence), pursuit rate when no polyp was on screen (median 2.7 s⁻¹, each prevalence) or number of mouse clicks [mean 0.75/video (20% prevalence), 0.93 (50%), 0.97 (80%)]. There was weak evidence of an increased tendency to look outside the central screen area at 80% prevalence and a reduction in positive polyp identifications at 20% prevalence. Conclusion: This study did not find a large effect of prevalence information on most visual search metrics or polyp identification in CTC. Further research is required to quantify effects at lower prevalence and in relation to secondary outcome measures.
Background: More than 30 million people are released from prison worldwide every year, who include a group at high risk of perpetrating interpersonal violence. Because there is considerable inconsistency and inefficiency in identifying those who would benefit from interventions to reduce this risk, we developed and validated a clinical prediction rule to determine the risk of violent offending in released prisoners. Methods: We did a cohort study of a population of released prisoners in Sweden. Through linkage of population-based registers, we developed predictive models for violent reoffending for the cohort. First, we developed a derivation model to determine the strength of prespecified, routinely obtained criminal history, sociodemographic, and clinical risk factors using multivariable Cox proportional hazard regression, and then tested them in an external validation. We measured discrimination and calibration for prediction of our primary outcome of violent reoffending at 1 and 2 years using cutoffs of 10% for 1-year risk and 20% for 2-year risk. Findings: We identified a cohort of 47 326 prisoners released in Sweden between 2001 and 2009, with 11 263 incidents of violent reoffending during this period. We developed a 14-item derivation model to predict violent reoffending and tested it in an external validation (assigning 37 100 individuals to the derivation sample and 10 226 to the validation sample). The model showed good measures of discrimination (Harrell's c-index 0.74) and calibration. For risk of violent reoffending at 1 year, sensitivity was 76% (95% CI 73–79) and specificity was 61% (95% CI 60–62). Positive and negative predictive values were 21% (95% CI 19–22) and 95% (95% CI 94–96), respectively. At 2 years, sensitivity was 67% (95% CI 64–69) and specificity was 70% (95% CI 69–72). Positive and negative predictive values were 37% (95% CI 35–39) and 89% (95% CI 88–90), respectively.
Of individuals with a predicted risk of violent reoffending of 50% or more, 88% had drug and alcohol use disorders. We used the model to generate a simple, web-based, risk calculator (OxRec) that is free to use. Interpretation: We have developed a prediction model in a Swedish prison population that can assist with decision making on release by identifying those who are at low risk of future violent offending, and those at high risk of violent reoffending who might benefit from drug and alcohol treatment. Further assessments in other populations and countries are needed. Funding: Wellcome Trust, the Swedish Research Council, and the Swedish Research Council for Health, Working Life and Welfare.
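The validation statistics reported above (sensitivity, specificity, positive and negative predictive values at a risk cut-off) all derive from a 2×2 table of predicted versus observed reoffending. A minimal sketch of that calculation, using invented counts rather than the OxRec validation data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Classification accuracy measures from a 2x2 table of
    predicted-positive/negative vs observed outcome counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among all actual positives
        "specificity": tn / (tn + fp),  # true negatives among all actual negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only (not the Swedish cohort data)
m = diagnostic_metrics(tp=160, fp=600, fn=50, tn=940)
print({k: round(v, 2) for k, v in m.items()})
```

Note that sensitivity/specificity are properties of the rule at a given cut-off, whereas PPV and NPV also depend on how common reoffending is in the validation population.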
Background: Internal appraisal styles, in addition to circadian and social rhythm instability, have been implicated in the development of mood experiences in bipolar disorder (BD), yet potential interactions between these variables remain under-researched. Methods: This study used online questionnaires to examine relationships between social and circadian rhythm instability, appraisal style and mood within populations at varying vulnerability for BD. Results: Participants with BD (n = 51), and those at behavioural high risk (BHR; n = 77), exhibited poor sleep quality and a stronger tendency to form internal appraisals of both positive and negative experiences compared to non-clinical controls (n = 498) and participants with fibromyalgia (n = 80). Participants with BD also exhibited a stronger tendency to adopt an internal, negative appraisal style compared to individuals at BHR. Sleep disturbance and internal appraisal styles were significantly associated with low mood in BD. Limitations: Sleep quality and social rhythm stability were assessed using self-report measures only, which may differ from objective measures. Causal relationships between constructs could not be examined due to the cross-sectional design. Conclusions: The findings suggest the importance of attending to internal appraisal styles and sleep quality when working therapeutically with individuals diagnosed with BD. Potential differences in the effect of appraisal style at the state and trait level warrant further exploration.
Introduction: Disease incidence differs between males and females for some infectious or inflammatory diseases. Sex differences in immune responses to some vaccines have also been observed, mostly to viral vaccines in adults. Little evidence is available on whether sex differences occur in response to immunisation in infancy, even though this is the age group in which most vaccines are administered. Factors other than sex, such as timing or coadministration of other vaccines, can also influence the immune response to vaccination. Methods and analysis: An individual participant data meta-analysis of randomised controlled trials of vaccines in healthy infants and young children will be conducted. Fully anonymised data from ≥170 randomised controlled trials of vaccines for diphtheria, tetanus, Bordetella pertussis, polio, Haemophilus influenzae type B, hepatitis B, Streptococcus pneumoniae, Neisseria meningitidis, measles, mumps, rubella, varicella and rotavirus will be combined for analysis. Outcomes include measures of immunogenicity (immunoglobulins), reactogenicity, safety and disease-specific clinical efficacy. Data from trials of vaccines containing similar components will be combined in hierarchical models and the effect of sex and timing of vaccinations estimated for each outcome separately. Ethics and dissemination: Systematic reviews of published estimates of sex differences cannot adequately answer questions in this field, since such comparisons are never the main purpose of a clinical trial and thus a large degree of reporting bias exists in the published literature. Recent improvements in the widespread availability of individual participant data from randomised controlled trials make it feasible to conduct extensive individual participant data meta-analyses which were previously impossible, thereby reducing the effect of publication or reporting bias on the understanding of the infant immune response.
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4–20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. Conclusion: We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification.
The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10–15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
Background: Although smoking cessation is currently the only guaranteed way to reduce the harm caused by tobacco smoking, a reasonable secondary tobacco control approach may be to try to reduce the harm from continued tobacco use amongst smokers unable or unwilling to quit. Possible approaches to reduce the exposure to toxins from smoking include reducing the amount of tobacco used, and using less toxic products, such as pharmaceutical nicotine and potential reduced-exposure tobacco products (PREPs), as an alternative to cigarettes. Objectives: To assess the effects of interventions intended to reduce the harm to health of continued tobacco use, we considered the following specific questions: do interventions intended to reduce harm have an effect on long-term health status? Do they lead to a reduction in the number of cigarettes smoked? Do they have an effect on smoking abstinence? Do they have an effect on biomarkers of tobacco exposure? And do they have an effect on biomarkers of damage caused by tobacco? Search methods: We searched the Cochrane Tobacco Addiction Group Trials Register (CRS) on 21 October 2015, using free-text and MeSH terms for harm reduction, smoking reduction and cigarette reduction. Selection criteria: Randomized or quasi-randomized controlled trials of interventions to reduce the amount smoked, or to reduce harm from smoking by means other than cessation. We included studies carried out in smokers with no immediate desire to quit all tobacco use. Primary outcomes were change in cigarette consumption, smoking cessation and any markers of damage or benefit to health, measured at least six months from the start of the intervention. Data collection and analysis: We assessed study eligibility for inclusion using standard Cochrane methods. We pooled trials with similar interventions and outcomes (>50% reduction in cigarettes per day (CPD) and long-term smoking abstinence), using fixed-effect models.
Where it was not possible to meta-analyse data, we summarized findings narratively. Main results: Twenty-four trials evaluated interventions to help those who smoke to cut down the amount smoked or to replace their regular cigarettes with PREPs, compared to placebo, brief intervention, or a comparison intervention. None of these trials directly tested whether harm reduction strategies reduced the harms to health caused by smoking. Most trials (14/24) tested nicotine replacement therapy (NRT) as an intervention to assist reduction. In a pooled analysis of eight trials, NRT significantly increased the likelihood of reducing CPD by at least 50% for people using nicotine gum or inhaler or a choice of product compared to placebo (risk ratio (RR) 1.75, 95% confidence interval (CI) 1.44 to 2.13; 3081 participants). Where average changes from baseline were compared for different measures, carbon monoxide (CO) and cotinine generally showed smaller reductions than CPD. Use of NRT versus placebo also significantly increased the likelihood of ultimately quitting smoking (RR 1.87, 95% CI 1.43 to 2.44; 8 trials, 3081 participants; quality of the evidence: low). Two trials comparing NRT and behavioural support to brief advice found a significant effect on reduction, but no significant effect on cessation. We found one trial investigating each of the following harm reduction intervention aids: bupropion, varenicline, electronic cigarettes and snus, plus one trial of nicotine patches to facilitate temporary abstinence. The evidence for all five intervention types was therefore imprecise, and it is unclear whether or not these aids increase the likelihood of smoking reduction or cessation. Two trials investigating two different types of behavioural advice and instructions on reducing CPD also provided imprecise evidence. Therefore, the evidence base for this comparison is inadequate to support the use of these types of behavioural advice to reduce smoking.
Four studies of PREPs (cigarettes with reduced levels of tar, carbon and nicotine, and in one case delivered using an electronically-heated cigarette smoking system) showed some reduction in exposure to some toxicants, but it is unclear whether this would substantially alter the risk of harm. We judged the included studies to be generally at a low or unclear risk of bias; however, there were some ratings of high risk, due to a lack of blinding and the potential for detection bias. Using the GRADE system, we rated the overall quality of the evidence for our cessation outcomes as 'low' or 'very low', due to imprecision and indirectness. A 'low' grade means that further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate. A 'very low' grade means we are very uncertain about the estimate. Authors' conclusions: People who do not wish to quit can be helped to cut down the number of cigarettes they smoke and to quit smoking in the long term, using NRT, despite original intentions not to do so. However, we rated the evidence contributing to the cessation outcome for NRT as 'low' by GRADE standards. There is a lack of evidence to support the use of other harm reduction aids to reduce the harm caused by continued tobacco smoking. This could simply be due to the lack of high-quality studies (our confidence in cessation outcomes for these aids is rated 'low' or 'very low' due to imprecision by GRADE standards), meaning that we may have missed a worthwhile effect, or due to a lack of effect on reduction or quit rates. It is therefore important that more high-quality RCTs are conducted, and that these also measure the long-term health effects of treatments.
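Several pooled estimates in this review (e.g. RR 1.75, 95% CI 1.44 to 2.13) come from fixed-effect meta-analysis. A hedged sketch of the standard inverse-variance pooling of log risk ratios is shown below; the study counts are invented for illustration and are not taken from the review:

```python
import math

def pooled_rr_fixed(studies):
    """Fixed-effect inverse-variance pooling of study-level risk ratios.

    Each study is a tuple (events_trt, n_trt, events_ctl, n_ctl)."""
    num = den = 0.0
    for e1, n1, e0, n0 in studies:
        log_rr = math.log((e1 / n1) / (e0 / n0))
        var = 1/e1 - 1/n1 + 1/e0 - 1/n0   # large-sample variance of log RR
        w = 1.0 / var                     # inverse-variance weight
        num += w * log_rr
        den += w
    log_pooled = num / den
    se = math.sqrt(1.0 / den)
    return math.exp(log_pooled), (math.exp(log_pooled - 1.96 * se),
                                  math.exp(log_pooled + 1.96 * se))

# Invented counts: (events, n) in intervention vs control for two trials
print(pooled_rr_fixed([(30, 100, 20, 100), (45, 150, 30, 150)]))
```

A random-effects model would additionally estimate between-study variance (e.g. via DerSimonian-Laird) and widen the weights accordingly; the fixed-effect version above matches the pooling approach this review states it used for the main comparisons.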
Background: Passive leg raising (PLR) is a so-called self-volume challenge used to test for fluid responsiveness. Changes in cardiac output (CO) or stroke volume (SV) measured during PLR are used to predict the need for subsequent fluid loading. This requires a device that can measure CO changes rapidly. The Vigileo™ monitor, using third-generation software, allows continuous CO monitoring. The aim of this study was to assess changes in CO (measured with the Vigileo device) during a PLR manoeuvre and to calculate the accuracy for predicting fluid responsiveness. Methods: This was a prospective study in a 20-bed mixed general critical care unit in a large non-university regional referral hospital. Fluid responders were defined as having an increase in CO of greater than 15% following a fluid challenge. Patients meeting the criteria for circulatory shock, with a Vigileo™ monitor (Vigileo™, FloTrac, Edwards Lifesciences, Irvine, CA, USA) already in situ, and assessed as requiring volume expansion by the clinical team based on clinical criteria, were included. All patients underwent a PLR manoeuvre followed by a fluid challenge. Results: Data were collected and analysed on stroke volume variation (SVV) at baseline and on CO and SVV changes during the PLR manoeuvre and following a subsequent fluid challenge in 33 patients. The majority had septic shock. Patient characteristics, baseline haemodynamic variables and baseline vasoactive infusion requirements were similar between fluid responders (10 patients) and non-responders (23 patients). Peak increase in CO occurred within 120 s during the PLR in all cases. Using an optimal cut point of a 9% increase in CO during the PLR produced an area under the receiver operating characteristic curve of 0.85 (95% CI 0.63 to 1.00), with a sensitivity of 80% (95% CI 44 to 96%) and a specificity of 91% (95% CI 70 to 98%).
Conclusions: CO changes measured by the Vigileo™ monitor using third-generation software during a PLR test predict fluid responsiveness in mixed medical and surgical patients with vasopressor-dependent circulatory shock.
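The area under the ROC curve reported above (0.85) can be read as the probability that a randomly chosen fluid responder shows a larger CO increase during PLR than a randomly chosen non-responder (the Mann-Whitney interpretation). A small illustration with invented values, not the study's measurements:

```python
def auc_mann_whitney(responder_scores, nonresponder_scores):
    """Empirical ROC area: fraction of responder/non-responder pairs in which
    the responder's score is higher (ties count half)."""
    pairs = greater = ties = 0
    for r in responder_scores:
        for n in nonresponder_scores:
            pairs += 1
            if r > n:
                greater += 1
            elif r == n:
                ties += 1
    return (greater + 0.5 * ties) / pairs

# Hypothetical % increases in CO during PLR
responders = [12, 15, 9, 20, 11]
nonresponders = [2, 5, -1, 8, 4, 10, 3]
print(auc_mann_whitney(responders, nonresponders))
```

An AUC of 0.5 means the test discriminates no better than chance; 1.0 means responders always score higher than non-responders.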
Despite the significant global burden of gastroenteritis and resulting sequelae, there is limited evidence on risk factors for sequelae development. We updated and extended previous systematic reviews by assessing the role of antibiotics, proton pump inhibitors (PPI) and symptom severity in the development of sequelae following campylobacteriosis and salmonellosis. We searched four databases, including PubMed, from 1 January 2011 to 29 April 2016. Observational studies reporting sequelae of reactive arthritis (ReA), Reiter's syndrome (RS), irritable bowel syndrome (IBS) and Guillain-Barré syndrome (GBS) following gastroenteritis were included. The primary outcome was incidence of the sequelae of interest amongst cases of campylobacteriosis and salmonellosis. A narrative synthesis was conducted where heterogeneity was high. Of the 55 articles included, incidence of ReA (n = 37), RS (n = 5), IBS (n = 12) and GBS (n = 9) was reported following campylobacteriosis and salmonellosis. A pooled summary for each sequela was not estimated due to the high level of heterogeneity across studies (I² > 90%). PPI usage and symptoms were sparsely reported. Three out of seven studies found a statistically significant association between antibiotic usage and development of ReA. Additional primary studies investigating risk-modifying factors in sequelae of GI infections are required to enable targeted interventions.
This is the protocol for a review and there is no abstract. The objectives are as follows: The main aim of this review will be to assess the effects of changing practitioner empathy or patient expectations for all conditions. The main objective is to conduct a systematic review of randomised trials where the intervention involves manipulating either (a) practitioner empathy or (b) patient expectations, or (c) both.
Background: Acute lower respiratory tract infections (ALRTIs) account for most antibiotics prescribed in primary care despite lack of efficacy, partly due to clinician uncertainty about aetiology and patient concerns about illness course. Nucleic acid amplification tests could assist antibiotic targeting. Methods: In this prospective cohort study, 645 patients presenting to primary care with acute cough and suspected ALRTI provided throat swabs at baseline. These were tested for respiratory pathogens by real-time polymerase chain reaction, and participants were classified as having a respiratory virus, bacteria, both or neither. Three hundred and fifty-four participants scored symptom severity daily for 1 week in a diary (0 = absent to 4 = severe problem). Results: Organisms were identified in 346/645 (53.6%) participants. There were differences in the prevalence of seven symptoms between the organism groups at baseline. Those with a virus alone, and those with both virus and bacteria, had higher average severity scores for all symptoms combined during the week of follow-up than those in whom no organisms were detected [adjusted mean differences 0.204 (95% confidence interval 0.010 to 0.398) and 0.348 (0.098 to 0.598), respectively]. There were no differences in the duration of symptoms rated as moderate or severe between organism groups. Conclusions: Differences in presenting symptoms and symptom severity can be identified between patients with viruses and bacteria identified on throat swabs. The magnitude of these differences is unlikely to influence management. Most patients had mild symptoms at 7 days regardless of aetiology, which could inform patients about likely symptom duration.
Aims: C-reactive protein (CRP) and neutrophil count (NC) are important diagnostic indicators of inflammation. Point-of-care (POC) technologies for these markers are available but rarely used in community settings in the UK. To inform the potential for POC tests, it is necessary to understand the demand for testing. We aimed to describe the frequency of CRP and NC test requests from primary care to central laboratory services, describe variability between practices and assess the relationship between the tests. Methods: We described the number of patients with either or both laboratory tests, and the volume of testing per individual and per practice, in a retrospective cohort of all adults in general practices in Oxfordshire, 2014–2016. Results: 372 017 CRP and 776 581 NC tests in 160 883 and 275 093 patients, respectively, were requested from 69 practices. CRP was tested mainly in combination with NC, while the latter was more often tested alone. The median (IQR) of CRP and NC tests/person tested was 1 (1–2) and 2 (1–3), respectively. The median (IQR) tests/practice/week was 36 (22–52) and 72 (50–108), and per 1000 persons registered/practice/week was 4 (3–5) and 8 (7–9), respectively. The median (IQR) CRP and NC concentrations were 2.7 (0.9–7.9) mg/dL and 4.1 (3.1–5.5)×10⁹/L, respectively. Conclusions: The high demand for CRP and NC testing in the community, and the range of results falling within the reportable range for current POC technologies, highlight the opportunity for laboratory testing to be supplemented by POC testing in general practice.
Diagnostic tests play an important role in the clinical decision-making process by providing information that enables patients to be identified and stratified to the most appropriate treatment and management strategies. Decision analytic modelling facilitates the synthesis of evidence from multiple sources to evaluate the cost effectiveness of diagnostic tests. This study critically reviews the methods used to model the cost effectiveness of diagnostic tests in UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) reports. UK NIHR HTA reports published between 2009 and 2018 were screened to identify those reporting an economic evaluation of a diagnostic test using decision analytic modelling. Existing decision modelling checklists were identified in the literature, and a modified checklist tailored to diagnostic economic evaluations was developed, piloted and used to assess the diagnostic models in HTA reports. Of 728 HTA reports published during the study period, 55 met the inclusion criteria. The majority of models performed well, with a clearly defined decision problem and analytical perspective (89% of HTAs met the criterion). The model structure usually reflected the care pathway and progression of the health condition. However, there are areas requiring improvement, predominantly systematic identification of treatment effects (20% met), poor selection of comparators (50% met) and assumed independence of tests used in sequence (32% took correlation between sequential tests into consideration). The complexity and constraints of performing decision analysis of diagnostic tests on costs and health outcomes make it particularly challenging and, as a result, quality issues remain. This review provides a comprehensive assessment of modelling in HTA reports, highlights problems and gives recommendations for future diagnostic modelling practice.
Aims: Heart failure (HF) is a global health burden and new strategies to achieve timely diagnosis and early intervention are urgently needed. Natriuretic peptide (NP) testing can be used to screen for left ventricular systolic dysfunction (LVSD), but evidence on test performance is mixed, and international HF guidelines differ in their recommendations. Our aim was to summarize the evidence on the diagnostic accuracy of NP screening for LVSD in general and high-risk community populations and estimate optimal screening thresholds. Methods: We searched relevant databases up to August 2020 for studies with a screened community population of over 100 adults reporting NP performance to diagnose LVSD. Study inclusion, quality assessment, and data extraction were conducted independently and in duplicate. Diagnostic test meta-analysis used hierarchical summary receiver operating characteristic curves to obtain estimates of pooled accuracy to detect LVSD, with optimal thresholds obtained to maximize the sum of sensitivity and specificity. Results: Twenty-four studies were identified, involving 26 565 participants: eight studies in high-risk populations (at least one cardiovascular risk factor), 12 studies in general populations, and four in both high-risk and general populations combined. For detecting LVSD in screened high-risk populations with N-terminal prohormone brain natriuretic peptide (NT-proBNP), the pooled sensitivity was 0.87 [95% confidence interval (CI) 0.73–0.94] and specificity 0.84 (95% CI 0.55–0.96); for BNP, sensitivity was 0.75 (95% CI 0.65–0.83) and specificity 0.78 (95% CI 0.72–0.84). Heterogeneity between studies was high, with variations in positivity threshold. Due to a paucity of high-risk studies that assessed NP performance at multiple thresholds, it was not possible to calculate optimal thresholds for LVSD screening in high-risk populations alone.
To provide an indication of where the positivity threshold might lie, the pooled accuracy estimates for LVSD screening in high-risk and general community populations were combined, giving an optimal cut-off of 311 pg/mL [sensitivity 0.74 (95% CI 0.53–0.88), specificity 0.85 (95% CI 0.68–0.93)] for NT-proBNP and 49 pg/mL [sensitivity 0.68 (95% CI 0.45–0.85), specificity 0.81 (95% CI 0.67–0.90)] for BNP. Conclusions: Our findings suggest that in high-risk community populations NP screening may accurately detect LVSD, potentially providing an important opportunity for diagnosis and early intervention. Our study highlights an urgent need for further prospective studies, as well as an individual participant data meta-analysis, to more precisely evaluate diagnostic accuracy and identify optimal screening thresholds in specifically defined community-based populations to inform future guideline recommendations.
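The "optimal thresholds" above are defined as the cut-offs maximising the sum of sensitivity and specificity (equivalently, Youden's J). A hedged sketch of that search over candidate cut-offs, using invented NT-proBNP values rather than the meta-analysis data:

```python
def optimal_threshold(scores_diseased, scores_healthy, candidate_thresholds):
    """Return (threshold, J, sensitivity, specificity) maximising Youden's J
    (= sensitivity + specificity - 1); values at or above the threshold
    count as test-positive."""
    best = None
    for t in candidate_thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < t for s in scores_healthy) / len(scores_healthy)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (t, j, sens, spec)
    return best

# Hypothetical NT-proBNP values (pg/mL) in LVSD vs no-LVSD groups
lvsd = [450, 900, 320, 1200, 280, 610]
no_lvsd = [90, 150, 40, 310, 220, 120, 60, 180]
print(optimal_threshold(lvsd, no_lvsd, sorted(set(lvsd + no_lvsd))))
```

In a diagnostic meta-analysis the equivalent point is read off the summary ROC curve rather than from raw values, but the criterion being maximised is the same.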
Background: Pharmacotherapies for smoking cessation increase the likelihood of achieving abstinence in a quit attempt. It is plausible that providing support, or, if support is offered, offering more intensive support or support including particular components, may increase abstinence further. Objectives: To evaluate the effect of adding or increasing the intensity of behavioural support for people using smoking cessation medications, and to assess whether there are different effects depending on the type of pharmacotherapy or the amount of support in each condition. We also looked at studies which directly compare behavioural interventions matched for contact time, where pharmacotherapy is provided to both groups (e.g. tests of different components or approaches to behavioural support as an adjunct to pharmacotherapy). Search methods: We searched the Cochrane Tobacco Addiction Group Specialised Register, clinicaltrials.gov, and the ICTRP in June 2018 for records with any mention of pharmacotherapy, including any type of nicotine replacement therapy (NRT), bupropion, nortriptyline or varenicline, that evaluated the addition of personal support or compared two or more intensities of behavioural support. Selection criteria: Randomised or quasi-randomised controlled trials in which all participants received pharmacotherapy for smoking cessation and conditions differed by the amount or type of behavioural support. The intervention condition had to involve person-to-person contact (defined as face-to-face or telephone). The control condition could receive less intensive personal contact, a different type of personal contact, written information, or no behavioural support at all. We excluded trials recruiting only pregnant women and trials which did not set out to assess smoking cessation at six months or longer. Data collection and analysis: For this update, screening and data extraction followed standard Cochrane methods.
The main outcome measure was abstinence from smoking after at least six months of follow-up. We used the most rigorous definition of abstinence for each trial, and biochemically validated rates if available. We calculated the risk ratio (RR) and 95% confidence interval (CI) for each study. Where appropriate, we performed meta-analysis using a random-effects model. Main results: Eighty-three studies, 36 of which were new to this update, met the inclusion criteria, representing 29,536 participants. Overall, we judged 16 studies to be at low risk of bias and 21 studies to be at high risk of bias. All other studies were judged to be at unclear risk of bias. Results were not sensitive to the exclusion of studies at high risk of bias. We pooled all studies comparing more versus less support in the main analysis. Findings demonstrated a benefit of behavioural support in addition to pharmacotherapy. When all studies of additional behavioural therapy were pooled, there was evidence of a statistically significant benefit from additional support (RR 1.15, 95% CI 1.08 to 1.22, I² = 8%, 65 studies, n = 23,331) for abstinence at longest follow-up, and this effect was not different when we compared subgroups by type of pharmacotherapy or intensity of contact. This effect was similar in the subgroup of eight studies in which the control group received no behavioural support (RR 1.20, 95% CI 1.02 to 1.43, I² = 20%, n = 4,018). Seventeen studies compared interventions matched for contact time but that differed in terms of the behavioural components or approaches employed. Of the 15 comparisons, all had small numbers of participants and events. Only one detected a statistically significant effect, favouring a health education approach (which the authors described as standard counselling containing information and advice) over a motivational interviewing approach (RR 0.56, 95% CI 0.33 to 0.94, n = 378).
Authors' conclusions There is high-certainty evidence that providing behavioural support in person or via telephone for people using pharmacotherapy to stop smoking increases quit rates. Increasing the amount of behavioural support is likely to increase the chance of success by about 10% to 20%, based on a pooled estimate from 65 trials. Subgroup analysis suggests that the incremental benefit from more support is similar over a range of levels of baseline support. More research is needed to assess the effectiveness of specific components that comprise behavioural support.
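The random-effects pooling described in the results above can be illustrated with a minimal sketch of the DerSimonian-Laird method, one common implementation of the random-effects model for risk ratios (the review's own software may use a different estimator). The trial counts and the function name below are invented for illustration only.

```python
import math

def pool_random_effects(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of risk ratios.
    Each argument is a per-study list: treatment events/totals,
    control events/totals. Illustrative sketch only."""
    logrr, var = [], []
    for a, n1, c, n2 in zip(events_t, n_t, events_c, n_c):
        logrr.append(math.log((a / n1) / (c / n2)))
        var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)   # delta-method variance of log RR
    w = [1 / v for v in var]                           # fixed-effect (inverse-variance) weights
    mu_fe = sum(wi * y for wi, y in zip(w, logrr)) / sum(w)
    q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, logrr))  # Cochran's Q
    df = len(logrr) - 1
    c_fac = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_fac)                  # between-study variance
    w_re = [1 / (v + tau2) for v in var]               # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_re, logrr)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I² heterogeneity statistic
    return math.exp(mu), (math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)), i2

# Invented counts for three hypothetical trials (quitters / participants)
rr, ci, i2 = pool_random_effects([30, 40, 25], [100, 120, 90],
                                 [20, 35, 22], [100, 120, 90])
```

The same pooled-RR-with-CI shape as reported in the abstract (e.g. RR 1.15, 95% CI 1.08 to 1.22, I² = 8%) falls out of this calculation when real trial counts are supplied.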
\n \n\n \n \nPurpose: To investigate the effect of increasing navigation speed on visual search and decision making during polyp identification at computed tomography (CT) colonography. Materials and Methods: Institutional review board permission was obtained to use deidentified CT colonography data for this prospective reader study. After obtaining informed consent from the readers, 12 CT colonography fly-through examinations that depicted eight polyps were presented at four different fixed navigation speeds to 23 radiologists. Speeds ranged from 1 cm/sec to 4.5 cm/sec. Gaze position was tracked by using an infrared eye tracker, and readers indicated that they saw a polyp by clicking a mouse. Patterns of searching and decision making by speed were investigated graphically and by multilevel modeling. Results: Readers identified polyps correctly in 56 of 77 viewings (72.7%) at the slowest speed but in only 137 of 225 viewings (60.9%) at the fastest speed (P = .004). They also identified fewer false-positive features at faster speeds (42 of 115 [36.5%] of videos at the slowest speed vs 89 of 345 [25.8%] at the fastest; P = .02). Gaze location was highly concentrated toward the central quarter of the screen area at faster speeds (mean proportion of gaze points in the central area, 86% at the slowest speed vs 97% at the fastest). Conclusion: Faster navigation speed at endoluminal CT colonography led to progressive restriction of visual search patterns. Greater speed also reduced both true-positive and false-positive colorectal polyp identification.
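As a rough illustration of the true-positive comparison reported above, the sketch below applies a naive two-proportion z-test to the published counts. The study itself used multilevel modelling to account for the same readers and cases contributing repeated observations, so this simplified test is not expected to reproduce the reported P = .004; the function name is an invention for this example.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Naive two-sample z-test for a difference in proportions.
    Ignores reader/case clustering, which the study's multilevel
    model accounts for, so results will differ from the paper's."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p-value
    return z, pval

# True-positive polyp viewings: 56/77 at the slowest speed, 137/225 at the fastest
z, p = two_proportion_z(56, 77, 137, 225)
```

The positive z statistic reflects the higher identification rate at the slowest speed; the attenuated significance relative to the paper shows why clustering-aware multilevel models were needed.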
\n \n\n \n \nIntroduction: Previous studies suggest that many systematic reviews contain meta-analyses that display temporal trends, such as the first study's result being more extreme than later studies' results, or a drift in the pooled estimate. We assessed the extent and characteristics of temporal trends using all Cochrane intervention reports published 2008-2012. Methods: We selected the largest meta-analysis within each report and analysed trends using methods including a Z-test (first versus subsequent estimates), generalised least squares, and cumulative sum charts. Predictors considered included meta-analysis size and review group. Results: Of 1288 meta-analyses containing at least 4 studies, the point estimate from the first study was more extreme than, and in the same direction as, the pooled estimate in 738 (57%), with a statistically significant difference (first versus subsequent) in 165 (13%). Generalised least squares indicated trends in 717 (56%); 18% of fixed-effect analyses had at least one violation of cumulative sum limits. For some methods, temporal patterns were associated with meta-analysis size and with use of a random-effects model, but there was no consistent association with review group. Conclusions: All results suggest that more meta-analyses demonstrate temporal patterns than would be expected by chance. Hence, assuming the standard meta-analysis model without a temporal trend is sometimes inappropriate. Factors associated with trends are likely to be context specific.
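One of the trend diagnostics named above, the Z-test of the first study's estimate against the subsequent studies, can be sketched as follows. This is a simplified illustration under an inverse-variance pooling assumption; the invented data and function name are not the authors' actual implementation.

```python
import math

def first_vs_subsequent_z(estimates, variances):
    """Z-test comparing the first study's effect estimate with the
    inverse-variance pooled estimate of all subsequent studies.
    Simplified sketch of one temporal-trend diagnostic."""
    y1, v1 = estimates[0], variances[0]
    w = [1 / v for v in variances[1:]]             # inverse-variance weights
    pooled = sum(wi * y for wi, y in zip(w, estimates[1:])) / sum(w)
    v_pooled = 1 / sum(w)                          # variance of the pooled estimate
    z = (y1 - pooled) / math.sqrt(v1 + v_pooled)   # first and subsequent are independent
    pval = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal p-value
    return z, pval

# Invented log odds ratios and variances: an extreme first study, milder later ones
z, p = first_vs_subsequent_z([0.8, 0.2, 0.1, 0.15], [0.04, 0.02, 0.03, 0.02])
```

A large |z| here corresponds to the "first study more extreme" pattern the abstract reports in 13% of meta-analyses.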
\n \n\n \n \n