Aims: To develop a computer-processable algorithm capable of running automated searches of routine data to flag miscoded and misclassified cases of diabetes for subsequent clinical review. Method: Anonymized computer data from the Quality Improvement in Chronic Kidney Disease (QICKD) trial (n=942031) were analysed using a binary method to assess the accuracy of data on diabetes diagnosis. Diagnostic codes were processed and stratified into definite, probable and possible diagnoses of Type 1 or Type 2 diabetes. Diagnostic accuracy was improved by using prescription compatibility and temporally sequenced anthropometric and biochemical data. Bayesian false detection rate analysis was used to compare findings with those of an entirely independent and more complex manual sort of the first-round QICKD study data (n=760588). Results: The prevalence of a definite diagnosis of Type 1 diabetes and Type 2 diabetes was 0.32% and 3.27%, respectively, when using the binary search method. Up to 35% of Type 1 diabetes and 0.1% of Type 2 diabetes cases were miscoded or misclassified on the basis of age/BMI and coding. False detection rate analysis demonstrated a close correlation between the new method and the published hand-crafted sort. Both methods had the highest false detection rate values when coding, therapeutic, anthropometric and biochemical filters were used (up to 90% for the new and 75% for the hand-crafted search method). Conclusions: A simple computerized algorithm achieves very similar results to more complex search strategies in identifying miscoded and misclassified cases of both Type 1 diabetes and Type 2 diabetes. It has the potential to be used as an automated audit instrument to improve the quality of diabetes diagnosis. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
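The abstract does not publish the exact flagging rules, but a rule-based screen of the kind it describes (combining diagnostic code, age at diagnosis, BMI and prescription compatibility) can be sketched as follows. The thresholds (age 35, BMI 30, age 25) and field names here are illustrative assumptions for this sketch, not the QICKD algorithm's actual criteria:

```python
def flag_for_review(diabetes_code, age_at_diagnosis, bmi, on_insulin):
    """Return review flags for a coded diabetes record.

    Illustrative rules only: the thresholds below are assumptions
    chosen for this sketch, not the study's published values.
    """
    flags = []
    # Coded Type 1, but older onset with higher BMI suggests Type 2
    if diabetes_code == "type1" and age_at_diagnosis >= 35 and bmi >= 30:
        flags.append("possible Type 2 miscoded as Type 1")
    # Coded Type 2, but very young onset without insulin: query classification
    if diabetes_code == "type2" and age_at_diagnosis < 25 and not on_insulin:
        flags.append("young onset: review classification")
    return flags
```

Flagged records would then go to clinical review rather than being reclassified automatically, matching the workflow the abstract describes.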
Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement
The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding of, systematically appraise, and identify areas for improvement in a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.
The 'rule of halves' still applies to the management of cholesterol in cardiovascular disease: 2002-2005
The current national target in the UK for total cholesterol is 5 mmol/L. The Primary Care Data Quality (PCDQ) programme reported in 2002 that only 50% of patients with coronary heart disease (CHD) achieved the 5 mmol/L target, and we report on progress since then. Routinely collected general practice computer data were extracted in two successive data collections in 2003 and 2004/05 and analysed. The standardised prevalence of CHD recorded in GP computer systems rose from 3.8% to 4.0% from 2002 to 2004/05. In patients with CHD, cholesterol recording rose from 47.6% to 89.0%, the percentage of patients receiving a statin rose from 49.4% to 71.5%, and mean cholesterol levels fell from 5.18 to 4.67 mmol/L. The proportion of CHD patients with a cholesterol recording achieving the 5 mmol/L target increased from 44.7% to 67.7%. Overall, 53.1% of patients with cardiovascular disease had total cholesterol below 5 mmol/L. Patients with CHD achieved better cholesterol control than those with stroke (4.87 mmol/L) or peripheral vascular disease (PVD) (4.79 mmol/L), and a higher percentage achieved the 5 mmol/L target (60.1% versus 43.3% and 49.9%, respectively). There remains scope for improved management of cholesterol in primary care, and greater efforts are needed to ensure that more patients with cardiovascular disease benefit from best practice.
A knowledge audit of the managers of primary care organizations: Top priority is how to use routinely collected clinical data for quality improvement
Technology has provided improved access to the rapidly expanding evidence base and to computerized clinical data recorded as part of routine care. A knowledge audit identifies from within this mass of information the knowledge requirements of a professional group or organization, enabling implementation of an appropriately tailored knowledge-management strategy. The objective of the study was to describe perceived knowledge gaps and recommend an appropriate knowledge-management strategy for primary care. The sample comprised 18 senior managers of Primary Care Trusts: the Chairman, Chief Executive Officer, or Research and Development Lead. A series of interviews were recorded verbatim, transcribed and analysed. Knowledge requirements were broad, suggesting that a broadly based knowledge-management strategy is needed in primary care. The biggest gap in current knowledge identified is how to perform needs assessment and quality improvement using aggregated routinely collected, general practice computer data. © 2005 Taylor & Francis Group Ltd.
A Class Comparison of Medication Persistence in People with Type 2 Diabetes: A Retrospective Observational Study
© 2018, The Author(s). Introduction: Longer medication persistence in type 2 diabetes (T2D) is associated with improved glycaemic control. It is not clear which oral therapies have the best persistence. The objective of this study was to compare medication persistence across different oral therapies in people with T2D. Methods: We performed a retrospective cohort analysis using a primary-care-based population, the Royal College of General Practitioners Research and Surveillance Centre cohort. We identified new prescriptions for oral diabetes medication in people with type 2 diabetes between January 1, 2004 and July 31, 2015. We compared median persistence across each class. We also compared non-persistence (defined as a prescription gap of ≥ 90 days) between classes, adjusting for confounders, using Cox regression. Confounders included: age, gender, ethnicity, socioeconomic status, alcohol use, smoking status, glycaemic control, diabetes duration, diabetes complications, comorbidities, and number of previous and concurrent diabetes medications. Results: We identified 60,327 adults with T2D. The majority of these, 42,810 (70.9%), had one or more oral medications prescribed; we measured persistence in those patients (who were prescribed 55,728 oral medications in total). Metformin had the longest median persistence (3.04 years; 95% CI 2.94–3.12). The adjusted hazard ratios for non-persistence compared with metformin were: sulfonylureas HR 1.20 (1.16–1.24), DPP-4 inhibitors HR 1.43 (1.38–1.49), thiazolidinediones HR 1.71 (95% CI 1.64–1.77), SGLT2 inhibitors HR 1.04 (0.93–1.17), meglitinides HR 2.25 (1.97–2.58), and alpha-glucosidase inhibitors HR 2.45 (1.98–3.02). The analysis of SGLT2 inhibitors was limited by the short duration of follow-up for this new class. Other factors associated with reduced medication persistence were female gender, younger age, and non-white ethnicity.
Conclusions: Persistence is strongly influenced by medication class and should be considered when initiating treatments.
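The study's operational definition of non-persistence (a prescription gap of ≥ 90 days) can be sketched roughly as below. The 28-day default supply length is an assumption made for illustration, not a figure from the paper, and the real analysis would additionally handle censoring at end of follow-up:

```python
from datetime import date, timedelta

GAP_DAYS = 90  # the study's non-persistence threshold

def persistence_days(prescription_dates, supply_days=28):
    """Days persistent on a medication: from the first issue until
    either a gap of >= GAP_DAYS days opens after a supply runs out
    (non-persistence) or the final supply is exhausted.

    supply_days=28 is an illustrative assumption, not a study figure.
    """
    dates = sorted(prescription_dates)
    covered_until = dates[0] + timedelta(days=supply_days)
    for d in dates[1:]:
        if (d - covered_until).days >= GAP_DAYS:
            break  # gap of >= 90 days: treated as non-persistent here
        covered_until = d + timedelta(days=supply_days)
    return (covered_until - dates[0]).days
```

Median persistence per drug class, as reported in the abstract, would then be the median of these durations over all patients initiating that class.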
Evaluation of the effect of the herpes zoster vaccination programme 3 years after its introduction in England: a population-based study
© 2018 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license Background: In 2013, a herpes zoster vaccination programme was introduced in England for adults aged 70 years with a phased catch-up programme for those aged 71–79 years. We aimed to evaluate the effect of the first 3 years of the vaccination programme on incidence of herpes zoster and postherpetic neuralgia in this population. Methods: In this population-based study, we extracted data from the Royal College of General Practitioners sentinel primary care network on consultations with patients aged 60–89 years for herpes zoster and postherpetic neuralgia occurring between Oct 1, 2005, and Sept 30, 2016, obtaining data from 164 practices. We identified individual data on herpes zoster vaccinations administered and consultations for herpes zoster and postherpetic neuralgia, and aggregated these data to estimate vaccine coverage and incidence of herpes zoster and postherpetic neuralgia consultations. We defined age cohorts to identify participants targeted in each year of the programme, and as part of the routine or catch-up programme. We modelled incidence according to age, region, gender, time period, and vaccine eligibility using multivariable Poisson regression with an offset for person-years. Findings: Our analysis included 3·36 million person-years of data, corresponding to an average of 310 001 patients aged 60–89 years who were registered at an RCGP practice each year. By Aug 31, 2016, uptake of the vaccine varied between 58% for the recently targeted cohorts and 72% for the first routine cohort. Across the first 3 years of vaccination for the three routine cohorts, incidence of herpes zoster fell by 35% (incidence rate ratio 0·65 [95% CI 0·60–0·72]) and of postherpetic neuralgia fell by 50% (0·50 [0·38–0·67]).
The equivalent reduction for the four catch-up cohorts was 33% for herpes zoster (incidence rate ratio 0·67 [0·61–0·74]) and 38% for postherpetic neuralgia (0·62 [0·50–0·79]). These reductions are consistent with a vaccine effectiveness of about 62% against herpes zoster and 70–88% against postherpetic neuralgia. Interpretation: The herpes zoster vaccination programme in England has had a population impact equivalent to about 17 000 fewer episodes of herpes zoster and 3300 fewer episodes of postherpetic neuralgia among 5·5 million eligible individuals in the first 3 years of the programme. Communication of the public health impact of this programme will be important to reverse the recent trend of declining vaccine coverage. Funding: Public Health England.
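The incidence rate ratios above come from a multivariable Poisson model with a person-years offset; the crude (unadjusted) version of the same quantity is simply a ratio of rates, sketched here with illustrative numbers rather than the study's counts:

```python
def incidence_rate_ratio(cases_exposed, py_exposed, cases_ref, py_ref):
    """Crude incidence rate ratio: the rate in the exposed (eligible)
    cohort divided by the rate in the reference cohort, each expressed
    as cases per person-year."""
    return (cases_exposed / py_exposed) / (cases_ref / py_ref)

def vaccine_effectiveness(irr):
    """Standard approximation for rate-based effectiveness: VE = 1 - IRR."""
    return 1.0 - irr
```

For example, 65 cases over 10 000 person-years against 100 cases over 10 000 person-years gives an IRR of 0.65, i.e. the 35% fall reported for the routine cohorts; the study's ~62% vaccine effectiveness is larger than 1 − 0.65 because not all eligible individuals were vaccinated.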
Seasonality and geographical spread of respiratory syncytial virus epidemics in 15 European countries, 2010 to 2016
© 2018, European Centre for Disease Prevention and Control (ECDC). All rights reserved. Respiratory syncytial virus (RSV) is considered the most common pathogen causing severe lower respiratory tract infections among infants and young children. We describe the seasonality and geographical spread of RSV infection in 15 countries of the European Union and European Economic Area. We performed a retrospective descriptive study of weekly laboratory-confirmed RSV detections between weeks 40/2010 and 20/2016, in patients investigated for influenza-like illness, acute respiratory infection or following the clinician’s judgment. Six countries reported 4,230 sentinel RSV laboratory diagnoses from primary care and 14 countries reported 156,188 non-sentinel laboratory diagnoses from primary care or hospitals. The median length of the RSV season based on sentinel and non-sentinel surveillance was 16 (range: 9–24) and 18 (range: 8–24) weeks, respectively. The median peak weeks for sentinel and non-sentinel detections were week 4 (range: 48 to 11) and week 4.5 (range: 49 to 17), respectively. RSV detections peaked later (r = 0.56; p = 0.0360) and seasons lasted longer with increasing latitude (r = 0.57; p = 0.0329). Our data demonstrated regular seasonality with moderate correlation between timing of the epidemic and increasing latitude of the country. This study supports the use of RSV diagnostics within influenza or other surveillance systems to monitor RSV seasonality and geographical spread.
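The latitude associations reported above (r = 0.56 for peak timing, r = 0.57 for season length) are Pearson correlation coefficients; a minimal stdlib computation of that statistic is:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length numeric sequences (e.g. country latitude vs RSV
    peak week)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the study's setting, `xs` would be country latitudes and `ys` the corresponding peak weeks (or season lengths); the country-level values themselves are not given in the abstract.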
Quality achievement and disease prevalence in primary care predicts regional variation in renal replacement therapy (RRT) incidence: An ecological study
Background. Diabetes mellitus (DM) and hypertension (HT) are important causes of end-stage renal disease (ESRD), and renal replacement therapy (RRT) is the standard active treatment. Financially incentivized quality initiatives for primary care include pay-for-performance (P4P) in DM and HT. Our aim was to examine any effect of disease prevalence and P4P on RRT incidence and regional variation. Methods. The incidence of RRT, sex and ethnicity data and P4P disease register and achievement data were obtained for each NHS locality. We calculated correlation coefficients for P4P indicators since 2004/05 and socio-demographic data for these 152 localities. We then developed a regression model and used the coefficient of determination (R2) to assess to what extent these variables might predict RRT incidence. Results. Many of the P4P indicators were weakly but highly significantly correlated with RRT incidence. The strongest correlations were with 2004/05 DM prevalence and 2006/07 HT quality. DM prevalence and the percentage with blood pressure control to target in HT (HT quality) were the most predictive in our regression model, R2 = 0.096 and R2 = 0.085, respectively (P < 0.001). Combined, they predicted a fifth of RRT incidence (R2 = 0.2, P < 0.001), while ethnicity and deprivation predicted a quarter (R2 = 0.25, P < 0.001). Our final model contained proportion of the population >75 years, DM prevalence, HT quality, ethnicity and deprivation index, and predicted 40% of variation (R2 = 0.4, P < 0.001). Conclusion. Our findings add prevalence of DM and quality of HT management to the known predictors of variation in RRT, ethnicity and deprivation. They raise the possibility that interventions in primary care might influence later events in specialist care. © 2011 The Author.
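The R2 values quoted above are coefficients of determination; for a single predictor, R2 equals the squared Pearson correlation between predictor and outcome. A minimal stdlib sketch for the one-predictor case (the study's final model is multivariable, which this does not reproduce):

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple (one-predictor)
    least-squares regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

Here `xs` might be locality-level DM prevalence and `ys` RRT incidence; an R2 of 0.096, as reported, would mean DM prevalence alone explains just under 10% of inter-locality variation.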
Aims: To determine the effectiveness of self-audit tools designed to detect miscoding, misclassification and misdiagnosis of diabetes in primary care. Methods: We developed six searches to identify people with diabetes with potential classification errors. The search results were automatically ranked from most to least likely to have an underlying problem. Eight practices with a combined population of 72000 and a diabetes prevalence of 2.9% (n=2340) completed audit forms to verify whether additional information within the patients' medical records confirmed or refuted the problems identified. Results: The searches identified 347 records, a mean of 42 per practice. Pre-audit, 20% (n=69) had Type 1 diabetes, 70% (n=241) had Type 2 diabetes, 9% (n=30) had vague codes that were hard to classify, 2% (n=6) were not coded and one person was labelled as having gestational diabetes. Of these records, 39.2% (n=136) had important errors: 10% (n=35) had coding errors; 12.1% (n=42) were misclassified; and 17.0% (n=59) were misdiagnosed as having diabetes. Of the 69 people coded as having Type 1 diabetes, 32% (n=22) actually had Type 2 diabetes; of the 241 coded as having Type 2 diabetes, 20% (n=48) did not have diabetes; of the 30 patients with vague diagnostic terms, 50% had Type 2 diabetes, 20% had Type 1 diabetes and 20% did not have diabetes. Examples of misdiagnosis were found in all practices, misclassification in seven and miscoding in six. Conclusions: Volunteer practices successfully used these self-audit tools. Approximately 40% of patients identified by the computer searches (5.8% of people with diabetes) had errors; misdiagnosis was the commonest, misclassification may affect treatment options, and miscoding results in omission from disease registers and the potential for reduced quality of care. © 2012 The Authors. Diabetic Medicine © 2012 Diabetes UK.
Creatinine fluctuation has a greater effect than the formula to estimate glomerular filtration rate on the prevalence of chronic kidney disease
Background/Aims: Cases of chronic kidney disease (CKD) are defined by the estimated glomerular filtration rate (eGFR), calculated using the Modification of Diet in Renal Disease (MDRD) or, more recently, the CKD Epidemiology Collaboration (CKD-EPI) formula. This study set out to promote a systematic approach to reporting CKD prevalence. Design, Setting, Participants and Measurements: The study explores the impact of the way in which eGFR is calculated on the prevalence of CKD. We took into account whether (1) including ethnicity, (2) using a single eGFR, (3) using more than 1 eGFR value or (4) using the CKD-EPI formula affected the estimates of prevalence. Sample: Of 930,997 registered patients, 36% (332,891) had their eGFR defined (63% of those aged 50-74 years, 81% of those aged >75 years). Results: The prevalence of stage 3-5 CKD is 5.41% (n = 50,331). (1) Without ethnicity data the prevalence would be 5.49%, (2) using just the latest eGFR, 6.4%, (3) excluding intermediary values, 5.55%, and (4) using the CKD-EPI equation, 4.8%. All changes in eGFR (t test) and in the proportion with CKD (χ2 test) were significant (p < 0.001). Using serum-creatinine-calculated eGFR instead of laboratory data reduced the prevalence of stage 3-5 CKD by around 0.01%. Sixty-six percent of people with stage 3-5 disease had cardiovascular disease and 4.0% had significant proteinuria using the MDRD formula; the corresponding figures using CKD-EPI were 74 and 4.6%. Conclusions: A standardised approach to reporting case finding would allow a better comparison of prevalence estimates. Using a single eGFR tends to inflate the reported prevalence of CKD by ignoring creatinine fluctuation; this effect is greater than the difference between MDRD and CKD-EPI. Copyright © 2010 S. Karger AG, Basel.
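Both equations the abstract compares are published formulas; the sketch below implements the IDMS-traceable 4-variable MDRD equation and the 2009 CKD-EPI creatinine equation (serum creatinine in mg/dL, result in mL/min/1.73 m²). The study's own data pipeline, repeat-eGFR logic and unit conversions are not shown:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable, 175 coefficient)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_ckd_epi(scr_mg_dl, age, female=False, black=False):
    """2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9   # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209 * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

Running both on the same creatinine value shows how the choice of formula alone can move a patient across the stage 3 threshold (eGFR < 60), which is the mechanism behind the prevalence differences the abstract reports.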
Using routinely collected data to evaluate a leaflet campaign to increase the presentation of people with memory problems to general practice: A locality based controlled study
Background: The Alzheimer's Society wished to raise awareness that people with memory problems may benefit from early assessment and diagnosis, so that appropriate measures could be put in place and management improved. Objective: To use routinely collected data to determine whether a leaflet campaign to raise awareness of memory problems would result in increased presentation of people with memory problems to their GPs. Method: A locality was identified which met the criteria for locating the pilot intervention. A neighbouring locality was identified which used the same secondary care service and could serve as a comparator. Anonymised routinely collected computer data were gathered before and after the intervention. Results: The intervention locality had a much greater proportion of elderly patients, and a higher proportion had memory problems recorded at baseline (OR 1.67; 95% CI 1.47-1.91; P<0.001). In both localities just under 40% of people with memory problems had blood tests. Approximately 80% would be referred to secondary care, and this was more likely for those in the intervention group (OR 1.29; 95% CI 0.99-1.93; P=0.044). However, the use of antidepressants was greater in the control locality; 34% vs 9% (OR 0.19; 95% CI 0.13-0.27; P<0.001). Whilst the absolute number of people prescribed cholinesterase inhibitors was greater and increased more in the intervention practices, the proportion of people with memory problems prescribed them was not significantly greater (OR 1.21; 95% CI 0.77-1.89; P=0.38). The increased prescribing in the intervention practices was due to people restarting therapy. From a lower baseline there was a greater increase in the control locality for all variables for which we had a before and after measure. Conclusions: During a leaflet campaign the recording and management of memory problems increased. However, there was greater improvement in the control locality.
This study demonstrates the importance of including a control group and the strengths of routine primary care data. © 2010 PHCSG, British Computer Society.
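The odds ratios and confidence intervals quoted above (e.g. OR 1.67; 95% CI 1.47-1.91) are standard 2x2-table quantities; a minimal stdlib sketch using the Woolf (log-normal) interval, with illustrative counts rather than the study's data, is:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-normal) 95% CI.

    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = exp(log(or_) - z * se_log)
    upper = exp(log(or_) + z * se_log)
    return or_, lower, upper
```

A CI that straddles 1.0, as with the cholinesterase-inhibitor result (0.77-1.89), is what marks that comparison as non-significant.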
Management of heart failure in primary care after implementation of the National Service Framework for Coronary Heart Disease: A cross-sectional study
Objectives: To compare the management of heart failure with the standards set out in the National Service Framework for Coronary Heart Disease. Study design: A cross-sectional study in 26 general practices, with a combined list size of 256,188, that are members of the Kent, Surrey and Sussex Primary Care Research Network. Methods: Information was extracted on the management of 2129 patients with heart failure, of whom 2097 were aged 45 years and over. Results: The prevalence of heart failure was 8.3 per 1000. Prevalence rates increased with age, from 0.2 per 1000 in people aged under 35 years to 125 per 1000 in those aged 85 years and over. Coronary heart disease (present in 47%) was the most common comorbid condition in men with heart failure, whereas hypertension (present in 46%) was the most common condition in women. Recording of cardiovascular risk factors was generally higher in younger patients than in older patients, and in men than in women. Blood pressure (92% of men and 90% of women) and smoking status (84% of men and 77% of women) were generally the best-recorded cardiovascular risk factors. Blood electrolytes were recorded in about 83% of men and 75% of women. Only 17% of men and 11% of women with heart failure had a record of undergoing an echocardiogram. Use of angiotensin-converting enzyme (ACE) inhibitors or antagonists was 76% in men with heart failure and 68% in women; the lowest rates were seen in older patients. Uptake of influenza immunization was generally high, at 85% in men and 84% in women. Conclusions: The use of ACE inhibitors in patients with heart failure was higher than in some previous studies. However, many patients have no documentation in their computerized medical records of having undergone key investigations, such as echocardiography. © 2004 The Royal Institute of Public Health. Published by Elsevier Ltd. All rights reserved.
Determinants of inter-practice variation in ADHD diagnosis and stimulant prescribing: Cross-sectional database study of a national surveillance network
© Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY. Published by BMJ. Early recognition, identification and treatment of children with attention deficit hyperactivity disorder (ADHD) can reduce detrimental outcomes and redirect their developmental trajectory. We aimed to describe variations in age of ADHD diagnosis and stimulant prescribing among general practitioner practices in a nationwide network and identify child, parental, household and general practice factors that might account for these variations. This was a cross-sectional study of children aged under 19 years registered with a general practice in the Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC) network in 2016; the RCGP RSC has a household key allowing parent and child details to be linked. Data from 158 general practices and 353 774 children under 19 were included. The mean age of first ADHD diagnosis was 10.5 years (95% CI 10.1 to 10.9, median 10, IQR 9.0-11.9) and the mean percentage of children with ADHD prescribed stimulant medications among RCGP RSC practices was 41.2% (95% CI 38.7 to 43.6). There was wide inter-practice variation in the prevalence of diagnosed ADHD, the age of diagnosis and stimulant prescribing. An ADHD diagnosis was more likely to be made later in households with a greater number of children and with a larger age difference between adults and children. Stimulant prescribing for children with ADHD was higher in less deprived practices. Older parents and families with more children may fail to recognise ADHD and may need more support. Practices in areas of higher socio-economic status were associated with greater prescribing of stimulants for children with ADHD.
Obstructive sleep apnoea in Type 2 diabetes mellitus: increased risk for overweight as well as obese people included in a national primary care database analysis
© 2019 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK Aims: To determine obstructive sleep apnoea prevalence in people with Type 2 or Type 1 diabetes in a national primary care setting, stratified by BMI category, and to explore the relationship between patient characteristics and obstructive sleep apnoea. Methods: Using the Royal College of General Practitioners Research and Surveillance Centre database, a cross-sectional analysis was conducted. Diabetes type was identified using a seven-step algorithm and was grouped into Type 2 diabetes, Type 1 diabetes and no diabetes. The clinical characteristics of these groups were analysed, BMI-stratified obstructive sleep apnoea prevalence rates were calculated, and a multilevel logistic regression analysis was completed on the Type 2 diabetes group. Results: Analysis of 1 275 461 adult records in the Royal College of General Practitioners Research and Surveillance Centre network showed an overall obstructive sleep apnoea prevalence of 0.7%. In people with Type 2 diabetes, obstructive sleep apnoea prevalence increased with each increasing BMI category, from 0.5% in those of normal weight to 9.6% in those in the highest obesity class. By comparison, obstructive sleep apnoea prevalence rates for these BMI categories in Type 1 diabetes were 0.3% and 4.3%, and in those without diabetes 1.2% and 3.9%, respectively. Obstructive sleep apnoea was more prevalent in men than women in both diabetes types. When known risk factors were adjusted for, there were increased odds ratios for obstructive sleep apnoea in people with Type 2 diabetes in the overweight and higher BMI categories. Conclusions: Obstructive sleep apnoea was reported in people with both types of diabetes across the range of overweight categories and not simply in the highest obesity class.