The evaluation of anaemia in an older primary care population: retrospective population-based study.
Background: Anaemia is common in older people and the identification of potentially reversible haematinic deficiencies relies on appropriate investigation, often undertaken in primary care. Aim: To determine the laboratory prevalence of anaemia, the types of anaemia observed, and the biochemical and haematological investigations undertaken to characterise any associated haematinic abnormality in older primary care patients. Design & setting: A retrospective primary care-based study of patients aged >65 years undergoing a full blood count in Oxfordshire, UK, between 1 January 2012 and 31 December 2013. Method: Consecutive patients aged >65 years with a full blood count were identified retrospectively from a laboratory database. Patient demographics, number of blood tests, and additional laboratory investigations requested were recorded. World Health Organisation (WHO) criteria were used to define anaemia. Results: In total, 151 473 full blood counts from 53 890 participants were included: 29.6% of patients were anaemic. The majority had a normocytic anaemia (82.4%), and 46.0% of participants with anaemia had no additional investigations performed. Mean haemoglobin was lower in the anaemic group that underwent further investigation than in the group that did not (Hb 10.68 g/dl versus 11.24 g/dl; P<0.05). Of patients with a microcytic anaemia (mean cell volume <80 fl), 33.2% did not have any markers of iron status measured. Conclusion: A large proportion of older adults in primary care with a recent blood test are anaemic, the majority with a normocytic anaemia, and there is evidence of inadequate investigation. Those with lower haemoglobin are more likely to be investigated further. Further work is needed to understand the approach to anaemia in older adults in primary care.
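The classification implied by the abstract can be made concrete. Below is a minimal sketch, assuming the standard WHO haemoglobin thresholds (<13 g/dl for men, <12 g/dl for women) and the conventional mean cell volume bands; the function name and exact band edges are illustrative, not taken from the study's own code.

```python
# Minimal sketch of the anaemia classification used in this kind of study:
# WHO haemoglobin thresholds plus conventional MCV bands. The function name
# and exact band edges are illustrative assumptions, not the study's code.

def classify_anaemia(hb_g_dl: float, sex: str, mcv_fl: float) -> str:
    """Subtype anaemia from haemoglobin (g/dl), sex, and MCV (fl)."""
    threshold = 13.0 if sex == "male" else 12.0  # WHO criteria
    if hb_g_dl >= threshold:
        return "not anaemic"
    if mcv_fl < 80:
        return "microcytic anaemia"   # would prompt iron status testing
    if mcv_fl > 100:
        return "macrocytic anaemia"   # would prompt B12/folate testing
    return "normocytic anaemia"       # the majority (82.4%) in this study

print(classify_anaemia(10.5, "female", 76))  # -> microcytic anaemia
```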
Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme
BACKGROUND: This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review. OBJECTIVES: (1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme. DATA SOURCES: We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014. REVIEW METHODS: This narrative systematic literature review comprised an update, extension and analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, from August 2014 through to March 2015. RESULTS: The literature on impact assessment has much expanded. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal type approaches included constructionist, realist, critical and performative. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers. DISCUSSION: The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence's remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities' research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF, but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact.
Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established. LIMITATIONS: There were limits to how far we could address the challenges that faced us as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus on the HTA programme alone. CONCLUSIONS: Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines. FUNDING: The National Institute for Health Research HTA programme.
BACKGROUND: Self-monitoring of blood pressure better predicts prognosis than clinic measurement, is popular with patients, and is endorsed in hypertension guidelines. However, there is uncertainty over the optimal self-monitoring schedule. We therefore aimed to determine the optimum schedule to predict future cardiovascular events and determine "true" underlying blood pressure. METHODS: Six electronic databases were searched from November 2009 (updating a National Institute for Health and Care Excellence [NICE] systematic review) to April 2017. Studies that compared aspects of self-monitoring schedules to either prognosis or reliability/reproducibility in hypertensive adults were included. Data on study and population characteristics, self-monitoring regimen, and outcomes were extracted by two reviewers independently. RESULTS: From 5,164 unique articles identified, 25 met the inclusion criteria. Twelve studies were included from the original NICE review, making a total of 37 studies. Increasing the number of days of measurement improved prognostic power: 72%-91% of the theoretical maximum predictive value (asymptotic maximum hazard ratio) was reached by 3 days and 86%-96% by 7 days. Increasing beyond 3 days of measurement did not result in better correlation with ambulatory monitoring. There was no convincing evidence that the timing or number of readings per day had an effect, or that ignoring the first day's measurement was necessary. CONCLUSIONS: Home blood pressure should be measured for 3 days, increasing to 7 only when the mean blood pressure is close to a diagnostic or treatment threshold. Other aspects of a monitoring schedule can be flexible to facilitate patient uptake of, and adherence to, self-monitoring.
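The recommended schedule is easy to express in code. A minimal sketch, assuming the common 135/85 mmHg home monitoring threshold and a +/-5 mmHg band as "close to threshold" (both are assumptions for illustration; the review does not fix a margin):

```python
from statistics import mean

HOME_THRESHOLD = (135, 85)  # systolic/diastolic home threshold (assumed)
MARGIN = 5                  # mmHg band treated as "close" (assumed)

def schedule_mean(readings_by_day, days):
    """Mean (systolic, diastolic) over the first `days` days of readings."""
    pooled = [r for day in readings_by_day[:days] for r in day]
    return (mean(r[0] for r in pooled), mean(r[1] for r in pooled))

def needs_seven_days(readings_by_day):
    """Extend monitoring to 7 days only when the 3 day mean is borderline."""
    sys3, dia3 = schedule_mean(readings_by_day, 3)
    return (abs(sys3 - HOME_THRESHOLD[0]) <= MARGIN
            or abs(dia3 - HOME_THRESHOLD[1]) <= MARGIN)

week = [[(142, 88), (138, 86)], [(136, 84), (134, 82)], [(133, 85), (131, 83)]]
print(schedule_mean(week, 3), needs_seven_days(week))  # borderline -> True
```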
BACKGROUND: Competitions might encourage people to undertake and/or reinforce behaviour change, including smoking cessation. Competitions involve individuals or groups having the opportunity to win a prize following successful cessation, either through direct competition or by entry into a lottery or raffle. OBJECTIVES: To determine whether competitions lead to higher long-term smoking quit rates. We also aimed to examine the impact on the population, the costs, and the unintended consequences of smoking cessation competitions. SEARCH METHODS: This review has merged two previous Cochrane reviews. Here we include studies testing competitions from the reviews 'Competitions and incentives for smoking cessation' and 'Quit & Win interventions for smoking cessation'. We updated the evidence by searching the Cochrane Tobacco Addiction Group Specialized Register in June 2018. SELECTION CRITERIA: We considered randomized controlled trials (RCTs), allocating individuals, workplaces, groups within workplaces, or communities to experimental or control conditions. We also considered controlled studies with baseline and post-intervention measures in which participants were assigned to interventions by the investigators. Participants were smokers, of any age and gender, in any setting. Eligible interventions were contests, competitions, lotteries, and raffles, to reward cessation and continuous abstinence in smoking cessation programmes. DATA COLLECTION AND ANALYSIS: For this update, data from new studies were extracted independently by two review authors. The primary outcome measure was abstinence from smoking at least six months from the start of the intervention. We performed meta-analyses to pool study effects where suitable data were available and where the effect of the competition component could be separated from that of other intervention components, and report other findings narratively. MAIN RESULTS: Twenty studies met our inclusion criteria. Five investigated performance-based rewards, where groups of smokers competed against each other to win a prize (N = 915). The remaining 15 used performance-based eligibility, where cessation resulted in entry into a prize draw (N = 10,580). Five of these used Quit & Win contests (N = 4282), of which three were population-level interventions. Fourteen studies were RCTs, and the remainder quasi-randomized or controlled trials. Six had suitable abstinence data for a meta-analysis, which did not show evidence of effectiveness of performance-based eligibility interventions (risk ratio (RR) 1.16, 95% confidence interval (CI) 0.77 to 1.74, N = 3201, I² = 57%). No trials that used performance-based rewards found a beneficial effect of the intervention on long-term quit rates. The three population-level Quit & Win studies found higher smoking cessation rates in the intervention group (4% to 16.9%) than the control group at long-term follow-up, but none were RCTs and all had important between-group differences in baseline characteristics. These studies suggested that fewer than one in 500 smokers would quit because of the contest. Reported unintended consequences in all sets of studies generally related to discrepancies between self-reported smoking status and biochemically verified smoking status.
More serious adverse events were not attributed to the competition intervention. Using the GRADE system we rated the overall quality of the evidence for smoking cessation as 'very low', because of the high and unclear risk of bias associated with the included studies, substantial clinical and methodological heterogeneity, and the limited population investigated. AUTHORS' CONCLUSIONS: At present, it is impossible to draw firm conclusions about the effectiveness, or lack of it, of smoking cessation competitions, owing to a lack of well-designed comparative studies. Smoking cessation competitions have not been shown to enhance long-term cessation rates. The limited evidence suggesting that population-based Quit & Win contests at local and regional level might deliver quit rates above baseline community rates has not been tested adequately using rigorous study designs. It is also unclear whether the value or frequency of possible cash reward schedules influences the success of competitions. Future studies should be designed to compensate for the substantial biases in the current evidence base.
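To make the pooling behind a summary such as "RR 1.16 (95% CI 0.77 to 1.74), I² = 57%" concrete, here is a minimal inverse-variance sketch on the log risk ratio scale, including the Cochran's Q route to I². The study-level values are hypothetical placeholders, and the review's own pooling method (for example Mantel-Haenszel) may differ.

```python
import math

# Hypothetical per-study (log risk ratio, standard error) pairs.
studies = [(0.30, 0.25), (-0.10, 0.20), (0.45, 0.30),
           (0.05, 0.15), (0.60, 0.35), (-0.20, 0.28)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

rr = math.exp(pooled)
lo, hi = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))

# Heterogeneity: Cochran's Q, then I^2 = max(0, (Q - df) / Q)
q = sum(w * (lr - pooled) ** 2 for (lr, _), w in zip(studies, weights))
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100

print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {i2:.0f}%")
```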
Background: Placebo and nocebo effects occur in clinical or laboratory medical contexts after administration of an inert treatment or as part of active treatments and are due to psychobiological mechanisms such as expectancies of the patient. Placebo and nocebo studies have evolved from predominantly methodological research into a far-reaching interdisciplinary field that is unravelling the neurobiological, behavioural and clinical underpinnings of these phenomena in a broad variety of medical conditions. As a consequence, there is an increasing demand from health professionals to develop expert recommendations about evidence-based and ethical use of placebo and nocebo effects for clinical practice. Methods: A survey and interdisciplinary expert meeting by invitation was organized as part of the 1st Society for Interdisciplinary Placebo Studies (SIPS) conference in 2017. Twenty-nine internationally recognized placebo researchers participated. Results: There was consensus that maximizing placebo effects and minimizing nocebo effects should lead to better treatment outcomes with fewer side effects. Experts particularly agreed on the importance of informing patients about placebo and nocebo effects and training health professionals in patient-clinician communication to maximize placebo and minimize nocebo effects. Conclusions: The current paper forms a first step towards developing evidence-based and ethical recommendations about the implications of placebo and nocebo research for medical practice, based on the current state of evidence and the consensus of experts. Future research might focus on how to implement these recommendations, including how to optimize conditions for educating patients about placebo and nocebo effects and providing training for the implementation in clinical practice.
The supposed superiority of randomized over non-randomized studies is used to justify claims about the therapeutic effectiveness of medical interventions, and also inclusion criteria for many systematic reviews of therapeutic interventions. However, the view that randomized trials provide better evidence has been challenged by philosophers of science. In addition, empirical evidence for average differences between randomized trials and observational studies (which we would expect if one method were superior) has proven difficult to find. This chapter reviews the controversy surrounding the relative merits of randomized trials and observational studies. It is concluded that while (well-conducted) observational studies can often provide the same level of evidential support as randomized trials, the merits of (well-conducted) randomized trials warrant claims about their superiority, especially where results from the two methods are contradictory.
Effects of empathic and positive communication in healthcare consultations: a systematic review and meta-analysis
Background: Practitioners who enhance how they express empathy and create positive expectations of benefit could improve patient outcomes. However, the evidence in this area has not been recently synthesised. Objective: To estimate the effects of empathy and expectations interventions for any clinical condition. Design: Systematic review and meta-analysis of randomised trials. Data sources: Six databases from inception to August 2017. Study selection: Randomised trials of empathy or expectations interventions in any clinical setting with patients aged 12 years or older. Review methods: Two reviewers independently screened citations, extracted data, assessed risk of bias and graded quality of evidence using GRADE. A random effects model was used for meta-analysis. Results: We identified 28 eligible trials (n = 6017). In seven trials, empathic consultations improved pain, anxiety and satisfaction by a small amount (standardised mean difference −0.18 [95% confidence interval −0.32 to −0.03]). Twenty-two trials tested the effects of positive expectations. Eighteen of these (n = 2014) reported psychological outcomes (mostly pain) and showed a modest benefit (standardised mean difference −0.43 [95% confidence interval −0.65 to −0.21]); 11 (n = 1790) reported physical outcomes (including bronchial function and length of hospital stay) and showed a small benefit (standardised mean difference −0.18 [95% confidence interval −0.32 to −0.05]). Within 11 trials (n = 2706) assessing harms, there was no evidence of adverse effects (odds ratio 1.04; 95% confidence interval 0.67 to 1.63). The risk of bias was low. The main limitations were difficulties in blinding and high heterogeneity for some comparisons. Conclusions: Greater practitioner empathy or communication of positive messages can have small patient benefits for a range of clinical conditions, especially pain. Protocol registration: Cochrane Database of Systematic Reviews (protocol) DOI: 10.1002/14651858.CD011934.pub2.
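For readers unfamiliar with the random effects model named here, the sketch below shows a DerSimonian-Laird pooling of standardised mean differences, the kind of model the review reports; the study-level estimates are hypothetical placeholders, not data from the review.

```python
import math

# Hypothetical per-study (SMD, standard error) pairs, not the review's data.
studies = [(-0.30, 0.12), (-0.10, 0.15), (-0.45, 0.20),
           (-0.05, 0.10), (-0.25, 0.18)]

w = [1 / se ** 2 for _, se in studies]                 # fixed-effect weights
mean_fe = sum(wi * d for (d, _), wi in zip(studies, w)) / sum(w)

# DerSimonian-Laird moment estimator of the between-study variance tau^2
q = sum(wi * (d - mean_fe) ** 2 for (d, _), wi in zip(studies, w))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights add tau^2 to each study's variance
w_re = [1 / (se ** 2 + tau2) for _, se in studies]
smd = sum(wi * d for (d, _), wi in zip(studies, w_re)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"SMD {smd:.2f} "
      f"(95% CI {smd - 1.96 * se_re:.2f} to {smd + 1.96 * se_re:.2f})")
```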
All forms of Brexit are bad for health, but some are worse than others. This paper builds on our analysis using the WHO health system building blocks framework to assess the likely effects of Brexit on the NHS in the UK. We consider four possible futures: (1) a “No Deal” Brexit under which the UK leaves the EU on 29 March 2019 without any formal agreement on the terms of withdrawal; (2) the Withdrawal Agreement, as negotiated between the UK and EU and awaiting (possible) formal agreement, which provides a transition period until the end of December 2020; (3) if the Northern Ireland Protocol’s ‘Backstop’ comes into effect after the end of that period; and (4) the Political Declaration on the Future Relationship between the UK and the EU. Our analysis shows that a No Deal Brexit is significantly worse for the NHS than a future involving the Withdrawal Agreement, which provides certainty and continuity in legal relations while the Future Relationship is negotiated and put into legal form. The Northern Ireland ‘Backstop’ has variable impact, with continuity in some areas, such as health products, but no continuity in others. The Political Declaration envisages a future relationship which is centred around a free trade agreement, in which wider health-related issues are largely absent. All forms of Brexit, however, involve negative repercussions for the UK’s leadership and governance of health, both in Europe and globally, and significant harmful consequences for the ability of parliament and other stakeholders to scrutinize and oversee governmental actions.
Do doctors in dispensing practices with a financial conflict of interest prescribe more expensive drugs? A cross-sectional analysis of English primary care prescribing data
Objectives: Approximately one in eight practices in primary care in England are 'dispensing practices' with an in-house dispensary providing medication directly to patients. These practices can generate additional income by negotiating lower prices on higher cost drugs, while being reimbursed at a standard rate. They therefore have a potential financial conflict of interest around prescribing choices. We aimed to determine whether dispensing practices are more likely to prescribe high-cost options for four commonly prescribed classes of drug where there is no evidence of superiority for high-cost options. Design: A list was generated of drugs with high acquisition costs that were no more clinically effective than those with the lowest acquisition costs, for all four classes of drug examined. Data were obtained on the prescribing of statins, proton pump inhibitors (PPIs), angiotensin receptor blockers (ARBs) and ACE inhibitors (ACEis). Logistic regression was used to calculate odds ratios (ORs) for prescribing high-cost options in dispensing practices, adjusting for Index of Multiple Deprivation score, practice list size and the number of doctors at each practice. Setting: English primary care. Participants: All general practices in England. Main outcome measures: Mean cost per dose was calculated separately for dispensing and non-dispensing practices. Dispensing practices vary in the number of patients they dispense to; we therefore additionally compared practices with no dispensing patients and with low, medium and high proportions of dispensing patients. Total cost savings were modelled by applying the mean cost per dose from non-dispensing practices to the number of doses prescribed in dispensing practices. Results: Dispensing practices were more likely to prescribe high-cost drugs across all classes: statins adjusted OR 1.51 (95% CI 1.49 to 1.53, p<0.0001), PPIs OR 1.11 (95% CI 1.09 to 1.13, p<0.0001), ACEis OR 2.58 (95% CI 2.46 to 2.70, p<0.0001), ARBs OR 5.11 (95% CI 5.02 to 5.20, p<0.0001). Mean cost per dose in pence was higher in dispensing practices (statins 7.44 vs 6.27, PPIs 5.57 vs 5.46, ACEis 4.30 vs 4.24, ARBs 11.09 vs 8.19). For all drug classes, the more dispensing patients a practice had, the more likely it was to issue a prescription for a high-cost option. Total cost savings in England available from all four classes are £628 875 per month, or £7 546 502 per year. Conclusions: Doctors in dispensing practices are more likely to prescribe higher cost drugs. This is the largest study conducted on dispensing practices, and the first contemporary research suggesting that some UK doctors respond to a financial conflict of interest in treatment decisions. The reimbursement system for dispensing practices may generate unintended consequences. Robust routine audit of practices prescribing higher volumes of unnecessarily expensive drugs may help reduce costs.
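The savings model described here reduces to simple arithmetic: price each class's dispensed doses at the non-dispensing mean cost per dose and take the difference. A minimal sketch, using the per-class mean costs reported in the abstract; the monthly dose counts are hypothetical placeholders.

```python
# Per-class mean cost per dose in pence (dispensing, non-dispensing),
# taken from the results above; the monthly dose counts are hypothetical.
cost_per_dose = {
    "statins": (7.44, 6.27),
    "PPIs":    (5.57, 5.46),
    "ACEis":   (4.30, 4.24),
    "ARBs":    (11.09, 8.19),
}
doses_in_dispensing_practices = {   # hypothetical placeholder counts
    "statins": 20_000_000, "PPIs": 15_000_000,
    "ACEis": 12_000_000, "ARBs": 5_000_000,
}

saving_pence = sum(
    doses_in_dispensing_practices[drug] * (disp - non_disp)
    for drug, (disp, non_disp) in cost_per_dose.items()
)
print(f"Modelled saving: £{saving_pence / 100:,.0f} per month")
```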
Development and validation of QDiabetes-2018 risk prediction algorithm to estimate future risk of type 2 diabetes: cohort study
Objectives: To derive and validate updated QDiabetes-2018 prediction algorithms to estimate the 10 year risk of type 2 diabetes in men and women, taking account of potential new risk factors, and to compare their performance with current approaches. Design: Prospective open cohort study. Setting: Routinely collected data from 1457 general practices in England contributing to the QResearch database: 1094 were used to develop the scores and a separate set of 363 were used to validate the scores. Participants: 11.5 million people aged 25-84 and free of diabetes at baseline: 8.87 million in the derivation cohort and 2.63 million in the validation cohort. Methods: Cox proportional hazards models were used in the derivation cohort to derive separate risk equations in men and women for evaluation at 10 years. Risk factors considered included those already in QDiabetes (age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids) and new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, gestational diabetes, and polycystic ovary syndrome. Additional models included fasting blood glucose and glycated haemoglobin (HbA1c). Measures of calibration and discrimination were determined in the validation cohort for men and women separately and for individual subgroups by age group, ethnicity, and baseline disease status. Main outcome measure: Incident type 2 diabetes recorded on the general practice record. Results: In the derivation cohort, 178 314 incident cases of type 2 diabetes were identified during follow-up, arising from 42.72 million person years of observation. In the validation cohort, 62 326 incident cases of type 2 diabetes were identified from 14.32 million person years of observation. All new risk factors considered met our model inclusion criteria. Model A included age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids, and the new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, and gestational diabetes and polycystic ovary syndrome in women. Model B included the same variables as model A plus fasting blood glucose. Model C included HbA1c instead of fasting blood glucose. All three models had good calibration and high levels of explained variation and discrimination. In women, model B explained 63.3% of the variation in time to diagnosis of type 2 diabetes (R²), the D statistic was 2.69, and Harrell's C statistic was 0.89. The corresponding values for men were 58.4%, 2.42, and 0.87. Model B also had the highest sensitivity compared with current recommended practice in the National Health Service based on bands of either fasting blood glucose or HbA1c. However, only 16% of patients had complete data for blood glucose measurements, smoking, and body mass index. Conclusions: Three updated QDiabetes risk models to quantify the absolute risk of type 2 diabetes were developed and validated: model A does not require a blood test and can be used to identify patients for fasting blood glucose (model B) or HbA1c (model C) testing.
Model B had the best performance for predicting 10 year risk of type 2 diabetes to identify those who need interventions and more intensive follow-up, improving on current approaches. Additional external validation of models B and C in datasets with more completely collected data on blood glucose would be valuable before the models are used in clinical practice.
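As a concrete illustration of how Cox-based scores of this kind convert a patient's covariates into an absolute risk, the sketch below applies the standard relation risk = 1 − S0(t)^exp(linear predictor), where S0(t) is the baseline survival at t years. The baseline survival value and the example linear predictor are hypothetical placeholders, not the published QDiabetes-2018 coefficients.

```python
import math

S0_10YR = 0.98  # hypothetical baseline 10 year survival, not the published value

def ten_year_risk(linear_predictor: float) -> float:
    """Absolute 10 year risk from a Cox linear predictor (sum of beta * x)."""
    return 1 - S0_10YR ** math.exp(linear_predictor)

# e.g. a patient whose risk factors sum to 1.2 on the log hazard scale
print(f"{ten_year_risk(1.2):.1%}")  # ~6.5% under these assumed values
```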
Development and validation of QMortality risk prediction algorithm to estimate short term risk of death and assess frailty: cohort study
Objectives: To derive and validate a risk prediction equation to estimate the short term risk of death, and to develop a classification method for frailty based on risk of death and risk of unplanned hospital admission. Design: Prospective open cohort study. Participants: Routinely collected data from 1436 general practices contributing data to QResearch in England between 2012 and 2016. 1079 practices were used to develop the scores and a separate set of 357 practices to validate the scores. 1.47 million patients aged 65-100 years were in the derivation cohort and 0.50 million patients in the validation cohort. Methods: Cox proportional hazards models in the derivation cohort were used to derive separate risk equations in men and women for evaluation of the risk of death at one year. Risk factors considered were age, sex, ethnicity, deprivation, smoking status, alcohol intake, body mass index, medical conditions, specific drugs, social factors, and results of recent investigations. Measures of calibration and discrimination were determined in the validation cohort for men and women separately and for each age and ethnic group. The new mortality equation was used in conjunction with the existing QAdmissions equation (which predicts risk of unplanned hospital admission) to classify patients into frailty groups. Main outcome measure: The primary outcome was all cause mortality. Results: During follow-up, 180 132 deaths were identified in the derivation cohort, arising from 4.39 million person years of observation. The final model included terms for age, body mass index, Townsend score, ethnic group, smoking status, alcohol intake, unplanned hospital admissions in the past 12 months, atrial fibrillation, antipsychotics, cancer, asthma or chronic obstructive pulmonary disease, living in a care home, congestive heart failure, corticosteroids, cardiovascular disease, dementia, epilepsy, learning disability, leg ulcer, chronic liver disease or pancreatitis, Parkinson's disease, poor mobility, rheumatoid arthritis, chronic kidney disease, type 1 diabetes, type 2 diabetes, venous thromboembolism, anaemia, abnormal liver function test result, high platelet count, and a doctor visit in the past year with either appetite loss, unexpected weight loss, or breathlessness. The model had good calibration and high levels of explained variation and discrimination. In women, the equation explained 55.6% of the variation in time to death (R²) and had very good discrimination: the D statistic was 2.29 and Harrell's C statistic was 0.85. The corresponding values for men were 53.1%, 2.18, and 0.84. By combining predicted risks of mortality and unplanned hospital admissions, 2.7% of patients (n=13 665) were classified as severely frail, 9.4% (n=46 770) as moderately frail, 43.1% (n=215 253) as mildly frail, and 44.8% (n=223 790) as fit. Conclusions: We have developed new equations to predict the short term risk of death in men and women aged 65 or more, taking account of demographic, social, and clinical variables. The equations had good performance in a separate validation cohort. The QMortality equations can be used in conjunction with the QAdmissions equations to classify patients into four frailty groups (known as QFrailty categories), enabling patients to be identified for further assessment or interventions.
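The frailty grouping can be sketched as a simple bucketing of the two predicted risks. The combination rule and cut-points below are hypothetical illustrations only, not the published QFrailty thresholds.

```python
def frailty_group(mortality_risk: float, admission_risk: float) -> str:
    """Bucket a patient from two predicted 1 year risks (both 0-1)."""
    combined = max(mortality_risk, admission_risk)  # assumed combination rule
    if combined >= 0.30:
        return "severely frail"
    if combined >= 0.15:
        return "moderately frail"
    if combined >= 0.05:
        return "mildly frail"
    return "fit"

print(frailty_group(0.08, 0.22))  # -> moderately frail
```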
Risks and benefits of direct oral anticoagulants versus warfarin in a real world setting: cohort study in primary care
OBJECTIVE: To investigate the associations between direct oral anticoagulants (DOACs) and risks of bleeding, ischaemic stroke, venous thromboembolism, and all cause mortality, compared with warfarin. DESIGN: Prospective open cohort study. SETTING: UK general practices contributing to QResearch or the Clinical Practice Research Datalink. PARTICIPANTS: 132 231 warfarin, 7744 dabigatran, 37 863 rivaroxaban, and 18 223 apixaban users without anticoagulant prescriptions for 12 months before study entry, subgrouped into 103 270 patients with atrial fibrillation and 92 791 without atrial fibrillation, between 2011 and 2016. MAIN OUTCOME MEASURES: Major bleeding leading to hospital admission or death. Specific sites of bleeding and all cause mortality were also studied. RESULTS: In patients with atrial fibrillation, compared with warfarin, apixaban was associated with a decreased risk of major bleeding (adjusted hazard ratio 0.66, 95% confidence interval 0.54 to 0.79) and intracranial bleeding (0.40, 0.25 to 0.64); dabigatran was associated with a decreased risk of intracranial bleeding (0.45, 0.26 to 0.77). An increased risk of all cause mortality was observed in patients taking rivaroxaban (1.19, 1.09 to 1.29) or on lower doses of apixaban (1.27, 1.12 to 1.45). In patients without atrial fibrillation, compared with warfarin, apixaban was associated with a decreased risk of major bleeding (0.60, 0.46 to 0.79), any gastrointestinal bleeding (0.55, 0.37 to 0.83), and upper gastrointestinal bleeding (0.55, 0.36 to 0.83); rivaroxaban was associated with a decreased risk of intracranial bleeding (0.54, 0.35 to 0.82). An increased risk of all cause mortality was observed in patients taking rivaroxaban (1.51, 1.38 to 1.66) and those on lower doses of apixaban (1.34, 1.13 to 1.58). CONCLUSIONS: Overall, apixaban was found to be the safest drug, with reduced risks of major, intracranial, and gastrointestinal bleeding compared with warfarin. Rivaroxaban and low dose apixaban were, however, associated with increased risks of all cause mortality compared with warfarin.
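As a minimal illustration of how an adjusted hazard ratio against a warfarin comparator is obtained from cohort data, the sketch below fits a Cox model with a treatment indicator using the lifelines library. The column names and toy data are assumptions for illustration, and the study adjusted for far more covariates than the single confounder shown here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: follow-up time, bleeding event indicator, treatment indicator
# (1 = apixaban, 0 = warfarin) and one confounder. All values are made up.
df = pd.DataFrame({
    "years":    [1.2, 0.8, 2.5, 3.1, 0.4, 1.9, 2.2, 1.0],
    "bleed":    [1, 0, 0, 1, 1, 0, 0, 1],
    "apixaban": [1, 1, 0, 0, 1, 0, 1, 0],
    "age":      [78, 81, 69, 74, 85, 72, 77, 80],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="bleed")
# exp(coef) on "apixaban" is the age-adjusted hazard ratio versus warfarin
print(cph.summary.loc["apixaban", "exp(coef)"])
```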