Search results
The effectiveness of mobile app usage in facilitating weight loss: An observational study
Aim: With global obesity rates and associated health issues rising, there is an ever-growing need for accessible weight management solutions. Mobile applications offer accessible support systems and have the potential to provide a viable and effective weight management solution as an alternative to traditional healthcare models. Objective: To evaluate the effectiveness of the SIMPLE mobile application for time-restricted eating in achieving weight loss (WL). Methods: User data were analyzed between January 2021 and January 2023. In-app activity was calculated as the proportion of active days over 12, 26 and 52 weeks. A day is considered active if it contains at least one in-app action (e.g., logging weight, food, fasting, or physical activity). Users were categorized into four in-app activity levels: inactive (in-app activity <33%), medium activity (33%–66%), high activity (66%–99%), and maximal activity (100%). Weight change among in-app activity groups was assessed at 12, 26, and 52 weeks. Results: Among the 53,482 users analyzed, use of the SIMPLE app was positively associated with WL: active app users lost more weight than their less active counterparts. Active users had a median WL of 4.20%, 5.04%, and 3.86% at 12, 26, and 52 weeks, respectively. A larger percentage of active users—up to 50.26%—achieved clinically significant WL (≥5%) when compared to inactive users. A dose-response relationship between WL and app usage was found after adjusting for gender, age, and initial body mass index; a 10% increase in app activity was associated with an additional 0.43, 0.66, and 0.69 kg of WL at 12, 26, and 52 weeks, respectively. Conclusions: The study demonstrates that the SIMPLE app supports effective WL, with the amount lost directly associated with the level of app engagement. Mobile health applications offer an accessible and effective weight management solution and should be considered when supporting adults to lose weight.
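The four activity bands above can be sketched as a simple classifier. Note that the bands as stated overlap at their boundaries (33%–66% vs 66%–99%); treating each lower bound as inclusive is an assumption of this sketch, not a detail given in the abstract.

```python
def in_app_activity_level(active_days: int, period_days: int) -> str:
    """Categorize a user by the share of days with at least one
    in-app action (e.g., logging weight, food, fasting, or activity).
    Boundary handling (lower bound inclusive) is assumed, not stated."""
    pct = active_days / period_days * 100
    if pct >= 100:
        return "maximal"   # active on every day of the period
    if pct >= 66:
        return "high"
    if pct >= 33:
        return "medium"
    return "inactive"
```

For example, a user active on 30 of the 84 days in a 12-week window (about 36%) would fall in the medium band.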
The effect of workload on primary care doctors' referral rates and prescription patterns: evidence from the English NHS.
This paper investigates the impact of workload pressure on primary care outcomes using a unique dataset from English general practices. Leveraging the absence of General Practitioner (GP) colleagues as an instrumental variable, we find that increased workload leads to an increase in prescription rates of antibiotics as well as in the share of assessment referrals. On the other hand, the quantity and frequency of psychotropic prescriptions decrease. When there is an absence, the additional workload falls mostly on GP partners, and the mode of consultation shifts toward remote interactions in response to higher workload pressure. The effects are more pronounced for patients over 65 years old and those in short-staffed practices. Our study sheds light on the intricate relationship between workload pressure and patient care decisions in primary care settings.
A registered report testing the effect of sleep on Deese-Roediger-McDermott false memory: greater lure and veridical recall but fewer intrusions after sleep
Human memory is known to be supported by sleep. However, less is known about the effect of sleep on false memory, where people incorrectly remember events that never occurred. In the laboratory, false memories are often induced via the Deese-Roediger-McDermott (DRM) paradigm, where participants are presented with wordlists comprising semantically related words such as nurse, hospital and sick (studied words). Subsequently, participants are likely to falsely remember that a related lure word such as doctor was presented. Multiple studies have examined whether these false memories are influenced by sleep, with contradictory results. A recent meta-analysis suggests that sleep may increase DRM false memory when short lists are used. We tested this in a registered report (N = 488) with a 2 (Interval: Immediate versus 12 h delay) × 2 (Test Time: 9:00 versus 21:00) between-participant DRM experiment, using short DRM lists (N = 8 words/list) and free recall as the memory test. We found an unexpected time-of-day effect such that completing free recall in the evening led to more intrusions (neither studied nor lure words). Above and beyond this time-of-day effect, the Sleep participants produced fewer intrusions than their Wake counterparts. When this was statistically controlled for, the Sleep participants falsely produced more critical lures. They also correctly recalled more studied words (regardless of intrusions). Exploratory analysis showed that these findings cannot be attributed to differences in output bias, as indexed by the number of total responses. Our overall results cannot be fully captured by existing sleep-specific theories of false memory, but help to define the role of sleep in two more general theories (Fuzzy-Trace and Activation/Monitoring theories) and suggest that sleep may benefit gist abstraction/spreading activation on one hand and memory suppression/source monitoring on the other.
Communicating treatment options to older patients with advanced kidney disease: a conversation analysis study
Background: Choosing to have dialysis or conservative kidney management is often challenging for older people with advanced kidney disease. While we know that clinical communication has a major impact on patients’ treatment decision-making, little is known about how this occurs in practice. The OSCAR study (Optimising Staff-Patient Communication in Advanced Renal disease) aimed to identify how clinicians present kidney failure treatment options in consultations with older patients and the implications of this for patient engagement. Methods: An observational, multi-method study design was adopted. Outpatient consultations at four UK renal units were video-recorded, and patients completed a post-consultation measure of shared decision-making (SDM-Q-9). Units were sampled according to variable rates of conservative management. Eligible patients were ≥65 years old with an eGFR of ≤20 mL/min/1.73 m² within the last 6 months. Video-recordings were screened to identify instances where clinicians presented both dialysis and conservative management. These instances were transcribed in fine-grained detail and recurrent practices identified using conversation-analytic methods, an empirical, observational approach to studying language and social interaction. Results: 110 outpatient consultations were recorded (105 video, 5 audio only), involving 38 clinicians (doctors and nurses) and 94 patients: mean age 77 (65–97); 61 males/33 females; mean eGFR 15 (range 4–23). There were 21 instances where clinicians presented both dialysis and conservative management. Two main practices were identified: (1) Conservative management and dialysis both presented as the main treatment options; (2) Conservative management presented as a subordinate option to dialysis. The first practice was less commonly used (6 vs. 
15 cases), but was associated with more opportunities in the conversation for patients to ask questions and share their perspective, through which they tended to evaluate conservative management as an option that was potentially personally relevant. This practice was also associated with significantly higher post-consultation ratings of shared decision-making among patients (SDM-Q-9 median total score 24 vs. 37, p = 0.041). Conclusions: Presenting conservative management and dialysis on an equal footing enables patients to take a more active role in decision-making. Findings should inform clinical communication skills training and education. Clinical trial number: No trial number as this is not a clinical trial.
Underlying disease risk among patients with fatigue: a population-based cohort study in primary care.
BACKGROUND: Presenting to primary care with fatigue is associated with a wide range of conditions, including cancer, although their relative likelihood is unknown. AIM: To quantify associations between new-onset fatigue presentation and subsequent diagnosis of various diseases, including cancer. DESIGN AND SETTING: A cohort study of patients presenting in English primary care with new-onset fatigue during 2007-2017 (the fatigue group) compared with patients who presented without fatigue (the non-fatigue group), using Clinical Practice Research Datalink data linked to hospital episodes and national cancer registration data. METHOD: The excess short-term incidence of 237 diseases in patients who presented with fatigue compared with those who did not present with fatigue is described. Disease-specific 12-month risk by sex was modelled and the age-adjusted risk calculated. RESULTS: The study included 304 914 people in the fatigue group and 423 671 in the non-fatigue group. In total, 127 of 237 diseases studied were more common in men who presented with fatigue than in men who did not, and 151 were more common in women who presented with fatigue. Diseases that were most strongly associated with fatigue included: depression; respiratory tract infections; insomnia and sleep disturbances; and hypo/hyperthyroidism (women only). By age 80 years, cancer was the third most common disease and had the fourth highest absolute excess risk in men who presented with fatigue (fatigue group: 7.01%, 95% confidence interval [CI] = 6.54 to 7.51; non-fatigue group: 3.36%, 95% CI = 3.08 to 3.67; absolute excess risk 3.65%). In women, cancer remained relatively infrequent; by age 80 years it had the thirteenth highest excess risk in patients who presented with fatigue. CONCLUSION: This study ranked the likelihood of possible diagnoses in patients who presented with fatigue, to inform diagnostic guidelines and doctors' decisions. 
Age-specific findings support recommendations to prioritise cancer investigation in older men (aged ≥70 years) with fatigue, but not in women at any age, based solely on the presence of fatigue.
Self-monitoring blood pressure in pregnancy: evaluation of women's experiences of the BUMP trials
BACKGROUND: The COVID-19 pandemic accelerated the adoption of remote care, or telemedicine, in many clinical areas including maternity care. One component of remote care, the use of self-monitoring of blood pressure in pregnancy, could form a key component in post-pandemic care pathways. The BUMP trials evaluated a self-monitoring of blood pressure intervention in addition to usual care, testing whether it improved detection or control of hypertension for pregnant people at risk of hypertension or with hypertension during pregnancy. This paper reports the qualitative evaluation, which aimed to understand how the intervention worked, the perspectives of participants in the trials, and, crucially, those who declined to participate. METHODS: The BUMP trials were conducted between November 2018 and May 2020. Thirty-nine in-depth qualitative interviews were carried out with a diverse sample of pregnant women invited to participate in the BUMP trials across five maternity units in England. RESULTS: Self-monitoring of blood pressure in the BUMP trials was reassuring, acceptable, and convenient, and sometimes alerted women to raised BP. While empowering, taking a series of self-monitored readings also introduced uncertainty and new responsibility. Some women declined to participate due to a range of concerns. In the intervention arm, the performance of the BUMP intervention may have been affected by women's selective or delayed reporting of raised readings and repeated testing in pursuit of normal BP readings. In the usual care arm, more women were already self-monitoring their blood pressure than expected. CONCLUSIONS: The BUMP trials did not find that, among pregnant individuals at higher risk of preeclampsia, blood pressure self-monitoring with telemonitoring led to significantly earlier clinic-based detection of hypertension or improved management of blood pressure. 
The findings from this study help us understand the role that self-monitoring of blood pressure can play in maternity care pathways. As maternity services consider the balance between face-to-face and remote consultations in the aftermath of the COVID-19 pandemic, these findings contribute to the evidence base needed to identify optimal, effective, and equitable approaches to self-monitoring of blood pressure.
Temporal trends and practice variation of paediatric diagnostic tests in primary care: retrospective analysis of 14 million tests
OBJECTIVE: The primary objective was to investigate temporal trends and between-practice variability of paediatric test use in primary care. METHODS AND ANALYSIS: This was a descriptive study of population-based data from Clinical Practice Research Datalink Aurum primary care consultation records from 1 January 2007 to 31 December 2019. Children aged 0-15 who were registered at one of the 1464 eligible general practices and had a diagnostic test code in their clinical record were included. The primary outcome measures were (1) temporal changes in test rates measured by the average annual percent change, stratified by test type, gender, age group and deprivation level and (2) practice variability in test use, measured by the coefficient of variation. RESULTS: 14 299 598 diagnostic tests were requested over 27.8 million child-years of observation for 2 542 101 children. Overall test use increased by 3.6%/year (95% CI 3.4 to 3.8%) from 399/1000 child-years to 608/1000 child-years, driven by increases in blood tests (8.0%/year, 95% CI 7.7 to 8.4), females aged 11-15 (4.0%/year, 95% CI 3.7 to 4.3), and children from the most socioeconomically deprived group (4.4%/year, 95% CI 4.1 to 4.8). Tests subject to the greatest temporal increases were faecal calprotectin, fractional exhaled nitric oxide and vitamin D. Tests classified as high-use and high-practice variability were iron studies, coeliac testing, vitamin B12, folate, and vitamin D. CONCLUSIONS: In this first nationwide study of paediatric test use in primary care, we observed significant temporal increases and practice variability in testing. This reflects inconsistency in practice and diagnosis rates and a scarcity of evidence-based guidance. Increased test use generates more clinical activity with significant resource implications but conversely may improve clinical outcomes. 
Future research should evaluate whether increased test use and variability are warranted by exploring test indications and test results and directly examine how increased test use impacts on quality of care.
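The two headline statistics of this study can be reproduced from the figures in the abstract. A minimal sketch (the exact estimation method is not stated, so treating the change as geometrically compounded over the 12 year-to-year intervals of 2007–2019 is an assumption):

```python
import math

def avg_annual_pct_change(rate_start: float, rate_end: float, years: int) -> float:
    """Geometric average annual percent change between two rates,
    assuming constant compounding over `years` intervals."""
    return (math.exp(math.log(rate_end / rate_start) / years) - 1) * 100

def coefficient_of_variation(rates: list[float]) -> float:
    """Between-practice variability: sample SD divided by the mean."""
    mean = sum(rates) / len(rates)
    sd = math.sqrt(sum((r - mean) ** 2 for r in rates) / (len(rates) - 1))
    return sd / mean
```

Applied to the reported rates, avg_annual_pct_change(399, 608, 12) gives roughly 3.6%/year, matching the overall increase reported in the abstract.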
Clinical Informatics Foundations of 57 Years of Sentinel and Genomic Surveillance: Data Quality, Linkage and Access
Sentinel surveillance networks are sophisticated health information systems that warn about outbreaks and the spread of infectious diseases with epidemic or pandemic potential, the effectiveness of countermeasures, and pressures on health systems. They are underpinned by their ability to turn data into information and knowledge in a timely way. The Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC) is one of Europe's oldest. We report its progressive use of technology to improve the scope of sentinel surveillance, with a focus on genomic surveillance. The technologies include terminologies, phenotypes, compute capability, virology including viral genome sequencing, and serology. The RSC's data collection developed from partial, then full, extraction of computerised medical record (CMR) data, with increasing sophistication in its creation of phenotypes. In 1967 the scope of surveillance was clinical diagnosis, with influenza-like illness (ILI) as its focus. Virology sampling started in the 1992–1993 winter, with progressively more sophisticated sequencing of the viral genome. From 2008 viral sequencing was comprehensive, with the Global Initiative on Sharing All Influenza Data (GISAID) as the primary repository, supplemented during the pandemic by the COVID-19 Genomics UK (COG-UK) consortium. High-quality primary care data capture sociodemographic features, risk group status, and vaccine exposure; linked hospital and death data inform about severe outcomes; virology identifies the causative organism and genomic surveillance the variant. Timely data access and analysis will enable identification of new variants resistant to vaccination or other countermeasures and enable new interventions to be developed.
Defining and Risk-Stratifying Immunosuppression (the DESTINIES Study): Protocol for an Electronic Delphi Study
Background: Globally, there are marked inconsistencies in how immunosuppression is characterized and subdivided into clinical risk groups. This is detrimental to the precision and comparability of disease surveillance efforts—which has negative implications for the care of those who are immunosuppressed and their health outcomes. This was particularly apparent during the COVID-19 pandemic; despite collective motivation to protect these patients, conflicting clinical definitions created international rifts in how those who were immunosuppressed were monitored and managed during this period. We propose that international clinical consensus be built around the conditions that lead to immunosuppression and their gradations of severity concerning COVID-19. Such information can then be formalized into a digital phenotype to enhance disease surveillance and provide much-needed intelligence on risk-prioritizing these patients. Objective: We aim to demonstrate how the objectives, methodology, and statistical approaches of an electronic Delphi study will help address this lack of consensus internationally and deliver a COVID-19 risk-stratified phenotype for “adult immunosuppression.” Methods: Leveraging existing evidence for heterogeneous COVID-19 outcomes in adults who are immunosuppressed, this work will recruit over 50 world-leading clinical, research, or policy experts in the area of immunology or clinical risk prioritization. After 2 rounds of clinical consensus building and 1 round of concluding debate, these panelists will confirm the medical conditions that should be classed as immunosuppressed and their differential vulnerability to COVID-19. Consensus statements on the time and dose dependencies of these risks will also be presented. This work will be conducted iteratively, with opportunities for panelists to ask clarifying questions between rounds and provide ongoing feedback to improve questionnaire items. Statistical analysis will focus on levels of agreement between responses. 
Results: This protocol outlines a robust method for improving consensus on the definition and meaningful subdivision of adult immunosuppression concerning COVID-19. Panelist recruitment took place between April and May of 2024; the target set for over 50 panelists was achieved. The study launched at the end of May and data collection is projected to end in July 2024. Conclusions: This protocol, if fully implemented, will deliver a universally acceptable, clinically relevant, and electronic health record–compatible phenotype for adult immunosuppression. As well as having immediate value for COVID-19 resource prioritization, this exercise and its output hold prospective value for clinical decision-making across all diseases that disproportionately affect those who are immunosuppressed.
Tracking cortical entrainment to stages of optic-flow processing.
In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as motion are derived. Determining the sequence of transforms involved in the perception of visual motion has been an active field since the 1940s. One plausible family of models is the spatiotemporal energy models, based on motion energy computed from the spatiotemporal features of the visual field. One of the most venerated is that of Heeger (1988), which hypothesizes that motion is estimated by matching the predicted spatiotemporal energy in frequency space. In this study, we investigate the plausibility of Heeger's model by testing for evidence of cortical entrainment to its components. Entrainment of cortical activity to these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots moving left and right across their visual field. We find entrainment to several components of Heeger's model bilaterally in occipital lobe regions, including representations of motion energy at a latency of 80 ms, overall velocity at 95 ms, and acceleration at 130 ms. We find little evidence of entrainment to displacement. We contrast Heeger's biologically inspired model with alternative baseline models, finding that Heeger's model provides a closer fit to the observed data. These results help shed light on the processes through which the perception of motion arises in the visual processing stream.
Associations of long-term nitrogen dioxide exposure with a wide spectrum of diseases: a prospective cohort study of 0·5 million Chinese adults
Background: Little evidence is available on the long-term health effects of nitrogen dioxide (NO2) in low-income and middle-income populations. We investigated the associations of long-term NO2 exposure with the incidence of a wide spectrum of disease outcomes, based on data from the China Kadoorie Biobank. Methods: This prospective cohort study involved 512 724 Chinese adults aged 30–79 years recruited from ten areas of China during 2004–08. Time-varying Cox regression models yielded adjusted hazard ratios (HRs) for the associations of long-term NO2 exposure with aggregated disease incidence endpoints classified by 14 ICD-10 chapters, and incidences of 12 specific diseases selected from three key ICD-10 chapters (cardiovascular, respiratory, and musculoskeletal diseases) found to be robustly associated with NO2 in the analyses of aggregated endpoints. All models were stratified by age-at-risk (in 1-year scale), study area, and sex, and were adjusted for education, household income, smoking status, alcohol intake, cooking fuel type, heating fuel type, self-reported health status, BMI, physical activity level, temperature, and relative humidity. Findings: The analysis of 512 709 participants (mean baseline age 52·0 years [SD 10·7]; 59·0% female and 41·0% male) included approximately 6·5 million person-years of follow-up. Between 5285 and 144 852 incident events were recorded for each of the 14 aggregated endpoints. Each 10 μg/m3 higher annual average NO2 exposure was associated with higher risks of chapter-specific endpoints, especially cardiovascular (n=144 852; HR 1·04 [95% CI 1·02–1·05]), respiratory (n=73 232; 1·03 [1·01–1·05]), musculoskeletal (n=54 409; 1·11 [1·09–1·14]), and mental and behavioural (n=5361; 1·12 [1·05–1·21]) disorders. 
Further in-depth analyses on specific diseases found significant positive supra-linear associations with hypertensive disease (1·08 [1·05–1·11]), lower respiratory tract infection (1·03 [1·01–1·06]), arthrosis (1·15 [1·09–1·21]), intervertebral disc disorders (1·13 [1·09–1·17]), and spondylopathies (1·05 [1·01–1·10]), and linear associations with ischaemic heart disease (1·03 [1·00–1·05]), ischaemic stroke (1·08 [1·06–1·11]), and asthma (1·15 [1·04–1·27]), whereas intracerebral haemorrhage (1·00 [0·95–1·06]), other cerebrovascular disease (0·98 [0·96–1·01]), acute upper respiratory infection (1·03 [0·96–1·09]), and chronic lower respiratory disease (0·98 [0·95–1·02]) showed no significant association. NO2 exposure showed robust null association with external causes (n=32 907; 0·98 [0·95–1·02]) as a negative control. Interpretation: In China, long-term NO2 exposure was associated with a range of diseases, particularly cardiovascular, respiratory, and musculoskeletal diseases. These associations underscore the pressing need to implement the recently tightened WHO air quality guidelines. Funding: Wellcome Trust, UK Medical Research Council, Cancer Research UK, British Heart Foundation, National Natural Science Foundation of China, National Key Research and Development Program of China, Sino-British Fellowship Trust, and Kadoorie Charitable Foundation.
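Because the Cox model is log-linear in exposure, a hazard ratio reported per 10 μg/m³ can be rescaled to other increments by exponentiation. This sketch is an illustration (the function name is ours), and the rescaling is only valid for the outcomes the abstract describes as linearly associated, not the supra-linear ones:

```python
import math

def rescale_hr(hr_per_10: float, increment_ug_m3: float) -> float:
    """Rescale a hazard ratio reported per 10 ug/m3 of NO2 to another
    exposure increment: HR(x) = exp(ln(HR_10) * x / 10).
    Valid only where the exposure-response relationship is linear."""
    return math.exp(math.log(hr_per_10) * increment_ug_m3 / 10.0)
```

For example, the ischaemic stroke HR of 1·08 per 10 μg/m³ corresponds to roughly 1·17 per 20 μg/m³.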
National trends in heart failure mortality in men and women, United Kingdom, 2000–2017
Aims: To understand gender differences in the prognosis of women and men with heart failure, we compared mortality, cause of death and survival trends over time. Methods and results: We analysed UK primary care data for 26 725 women and 29 234 men over age 45 years with a new diagnosis of heart failure between 1 January 2000 and 31 December 2017 using the Clinical Practice Research Datalink, inpatient Hospital Episode Statistics and the Office for National Statistics death registry. Age-specific overall survival and cause-specific mortality rates were calculated by gender and year. During the study period 15 084 women and 15 822 men with heart failure died. Women were on average 5 years older at diagnosis (79.6 vs. 74.8 years). Median survival was lower in women compared to men (3.99 vs. 4.47 years), but women had a 14% age-adjusted lower risk of all-cause mortality [hazard ratio (HR) 0.86, 95% confidence interval (CI) 0.84–0.88]. Heart failure was equally likely to be the cause of death in women and men (HR 1.03, 95% CI 0.96–1.12). There were modest improvements in survival for both genders, but these were greater in men. The reduction in mortality risk in women was greatest for those diagnosed in the community (HR 0.83, 95% CI 0.80–0.85). Conclusions: Women are diagnosed with heart failure at an older age than men but have a better age-adjusted prognosis. Survival gains over the last two decades were smaller in women. Addressing gender differences in heart failure diagnostic and treatment pathways should be a clinical and research priority.
Risk factors for SARS-CoV-2 among patients in the Oxford Royal College of General Practitioners Research and Surveillance Centre primary care network: a cross-sectional study
Background: There are few primary care studies of the COVID-19 pandemic. We aimed to identify demographic and clinical risk factors for testing positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) within the Oxford Royal College of General Practitioners (RCGP) Research and Surveillance Centre primary care network. Methods: We analysed routinely collected, pseudonymised data for patients in the RCGP Research and Surveillance Centre primary care sentinel network who were tested for SARS-CoV-2 between Jan 28 and April 4, 2020. We used multivariable logistic regression models with multiple imputation to identify risk factors for positive SARS-CoV-2 tests within this surveillance network. Findings: We identified 3802 SARS-CoV-2 test results, of which 587 were positive. In multivariable analysis, male sex was independently associated with testing positive for SARS-CoV-2 (296 [18·4%] of 1612 men vs 291 [13·3%] of 2190 women; adjusted odds ratio [OR] 1·55, 95% CI 1·27–1·89). Adults were at increased risk of testing positive for SARS-CoV-2 compared with children, and people aged 40–64 years were at greatest risk in the multivariable model (243 [18·5%] of 1316 adults aged 40–64 years vs 23 [4·6%] of 499 children; adjusted OR 5·36, 95% CI 3·28–8·76). Compared with white people, the adjusted odds of a positive test were greater in black people (388 [15·5%] of 2497 white people vs 36 [62·1%] of 58 black people; adjusted OR 4·75, 95% CI 2·65–8·51). People living in urban areas versus rural areas (476 [26·2%] of 1816 in urban areas vs 111 [5·6%] of 1986 in rural areas; adjusted OR 4·59, 95% CI 3·57–5·90) and in more deprived areas (197 [29·5%] of 668 in most deprived vs 143 [7·7%] of 1855 in least deprived; adjusted OR 2·03, 95% CI 1·51–2·71) were more likely to test positive. 
People with chronic kidney disease were more likely to test positive in the adjusted analysis (68 [32·9%] of 207 with chronic kidney disease vs 519 [14·4%] of 3595 without; adjusted OR 1·91, 95% CI 1·31–2·78), but there was no significant association with other chronic conditions in that analysis. We found increased odds of a positive test among people who are obese (142 [20·9%] of 680 people with obesity vs 171 [13·2%] of 1296 normal-weight people; adjusted OR 1·41, 95% CI 1·04–1·91). Notably, active smoking was linked with decreased odds of a positive test result (47 [11·4%] of 413 active smokers vs 201 [17·9%] of 1125 non-smokers; adjusted OR 0·49, 95% CI 0·34–0·71). Interpretation: A positive SARS-CoV-2 test result in this primary care cohort was associated with similar risk factors as observed for severe outcomes of COVID-19 in hospital settings, except for smoking. We provide evidence of potential sociodemographic factors associated with a positive test, including deprivation, population density, ethnicity, and chronic kidney disease. Funding: Wellcome Trust.
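The crude (unadjusted) odds ratios implied by the counts in this abstract can be recomputed directly from the 2×2 tables; they differ from the reported values because the paper's estimates come from multivariable logistic regression with multiple imputation. A minimal sketch:

```python
def crude_odds_ratio(exp_pos: int, exp_total: int,
                     ref_pos: int, ref_total: int) -> float:
    """Unadjusted odds ratio: odds of a positive test in the exposed
    group divided by the odds in the reference group."""
    odds_exposed = exp_pos / (exp_total - exp_pos)
    odds_reference = ref_pos / (ref_total - ref_pos)
    return odds_exposed / odds_reference
```

From the male/female counts (296 of 1612 vs 291 of 2190), the crude OR is about 1.47, against the adjusted 1·55 reported above.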