Developing a survey instrument to assess the readiness of primary care data, genetic and disease registries to conduct linked research: TRANSFoRm International Research Readiness (TIRRE) survey instrument
Background: Clinical data are collected for routine care in family practice; there are also a growing number of genetic and cancer registry data repositories. The Translational Research and Patient Safety in Europe (TRANSFoRm) project seeks to facilitate research using linked data from more than one source. We performed a requirements analysis which identified a wide range of data and business process requirements that need to be met before linking primary care data with either genetic or disease registry data. Objectives: To develop a survey to assess the readiness of data repositories to participate in linked research - the TRANSFoRm International Research Readiness (TIRRE) survey. Method: We developed the questionnaire based on our requirements analysis, with questions at micro-, meso- and macro-levels of granularity, study-specific questions about diabetes and gastro-oesophageal reflux disease (GORD), and questions about research track record. The scope of the data required was extensive. We piloted this instrument, conducting ten preliminary telephone interviews to evaluate the response to the questionnaire. Results: Using feedback gained from these interviews we revised the questionnaire, clarifying questions that were difficult to answer and using skip logic to create different series of questions for the various types of data repository. We simplified the questionnaire, replacing free-text responses with yes/no or picking-list options wherever possible. We placed the final questionnaire online and encouraged its use (www.clininf.eu/jointirre/info.html). Conclusion: Limited field testing suggests that TIRRE is capable of collecting comprehensive and relevant data about the suitability and readiness of data repositories to participate in linked data research. © 2012 PHCSG, British Computer Society.
Agile exploration of electronic health records with application to comparing the quality of blood pressure control in pay-for-performance targets in a cross-sectional study
Computerised Medical Record (CMR) data are widely used for secondary purposes such as service evaluation and epidemiological research. Data are increasingly aggregated over time from different medical facilities using various CMR vendors, and it is increasingly difficult to manage the large quantity of data. Experiential learning in diabetes and chronic kidney disease (CKD) suggests that simplistic processing can lead to errors. To maximise analytical ability for the Quality Improvement in CKD (QICKD) trial, we developed an agile data management process. By removing the need to import and process data in a relational database, we reduced processing and analysis time. We demonstrated the use of our new agile method to rapidly develop complex queries to identify how blood pressure varied between patients included in or excluded from Quality and Outcomes Framework (QOF) pay-for-performance (P4P) targets in UK primary care. We describe a novel specification language that allows clinicians to focus on identifying variables to extract useful information from CMRs. Data for research questions were available in less than one hour, instead of the longer times previously required when using an SQL database. © 2013 IMIA and IOS Press.
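The specification language itself is not reproduced in the abstract; purely as a hypothetical illustration of the idea (clinician-named variables driving extraction directly from a flat CMR export rather than an imported SQL database), a minimal sketch might look like this:

```python
# Minimal sketch of variable-specification-driven extraction from a flat CMR
# export. The column names ("patient_id", "code", "value", "event_date") and
# the clinical codes are hypothetical; the QICKD specification language itself
# is not described in the abstract.
import pandas as pd

# A clinician-readable specification: variable name -> codes of interest and a rule
SPEC = {
    "latest_sbp": {"codes": ["<SBP code>"], "take": "latest"},
    "in_p4p_bp_target_group": {"codes": ["<register code>"], "take": "any"},
}

def extract(events: pd.DataFrame, spec: dict) -> pd.DataFrame:
    """Turn a long, flat event file into one row per patient, one column per variable."""
    out = {}
    for name, rule in spec.items():
        subset = events[events["code"].isin(rule["codes"])]
        if rule["take"] == "latest":
            # keep the most recent recording per patient
            latest = subset.sort_values("event_date").groupby("patient_id").tail(1)
            out[name] = latest.set_index("patient_id")["value"]
        else:  # "any": flag patients with at least one matching code
            out[name] = subset.groupby("patient_id").size() > 0
    return pd.DataFrame(out)

# events = pd.read_csv("cmr_export.csv", parse_dates=["event_date"])
# table = extract(events, SPEC)
```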
Audit-based education lowers systolic blood pressure in chronic kidney disease: The Quality Improvement in CKD (QICKD) trial results
Strict control of systolic blood pressure is known to slow progression of chronic kidney disease (CKD). Here we compared audit-based education (ABE) to guidelines and prompts or usual practice in lowering systolic blood pressure in people with CKD. This 2-year cluster randomized trial included 93 volunteer general practices randomized into three arms with 30 ABE practices, 32 with guidelines and prompts, and 31 usual practices. An intervention effect on the primary outcome, systolic blood pressure, was calculated using a multilevel model to predict changes after the intervention. The prevalence of CKD was 7.29% (41,183 of 565,016 patients) with all cardiovascular comorbidities more common in those with CKD. Our models showed that the systolic blood pressure was significantly lowered by 2.41 mm Hg (CI 0.59-4.29 mm Hg), in the ABE practices with an odds ratio of achieving at least a 5 mm Hg reduction in systolic blood pressure of 1.24 (CI 1.05-1.45). Practices exposed to guidelines and prompts produced no significant change compared to usual practice. Male gender, ABE, ischemic heart disease, and congestive heart failure were independently associated with a greater lowering of systolic blood pressure but the converse applied to hypertension and age over 75 years. There were no reports of harm. Thus, individuals receiving ABE are more likely to achieve a lower blood pressure than those receiving only usual practice. The findings should be interpreted with caution due to the wide confidence intervals. © 2013 International Society of Nephrology.
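The abstract does not give the model specification; a minimal sketch of a multilevel model with practices as the clustering unit, under assumed variable names, could be written with statsmodels as follows:

```python
# Minimal sketch of a multilevel model with a random intercept for each practice,
# the clustering unit in this cluster randomised trial. The column names
# (sbp_change, arm, male, age_over_75, ihd, chf, practice_id) and the input file
# are hypothetical; this is not the trial's actual model specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("qickd_analysis_set.csv")

model = smf.mixedlm(
    "sbp_change ~ C(arm, Treatment('usual')) + male + age_over_75 + ihd + chf",
    data=df,
    groups=df["practice_id"],  # practice-level random intercept
)
result = model.fit()
print(result.summary())  # fixed-effect estimates, e.g. the ABE arm coefficient
```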
Conducting requirements analyses for research using routinely collected health data: A model driven approach
Background: Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Methods: Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. Results: We have developed a requirements analysis process, i-ScheDULEs. The first components of the modeling process are indexing and the creation of a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally business process models, using business process modeling notation (BPMN). Discussion: These requirements and their associated models should become part of research study protocols. © 2012 European Federation for Medical Informatics and IOS Press. All rights reserved.
Business Process Modelling is an Essential Part of a Requirements Analysis. Contribution of EFMI Primary Care Working Group
OBJECTIVES: To perform a requirements analysis of the barriers to conducting research linking primary care, genetic and cancer data. METHODS: We extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of the core data requirements common to most studies using unified modelling language (UML), data flow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. RESULTS: Differences between the reference model and use cases identified study-specific data requirements. The stakeholder analysis identified tensions, changes in specification, some indifference from data providers and enthusiastic informaticians urging inclusion of the socio-cultural context. We identified requirements to collect information at three levels: micro - data items, which need to be semantically interoperable; meso - the medical record and data extraction; and macro - the health system and socio-cultural issues. BPMN clarified complex business requirements among data providers and vendors, and additional geographical requirements for patients to be represented in both linked datasets. High-quality research output was the norm for most repositories. CONCLUSIONS: Reference models provide high-level schemata of the core data requirements. However, business requirements modelling identifies stakeholder issues and what needs to be addressed to enable participation.
Consistent data recording across a health system and Web-enablement allow service quality comparisons: Online data for commissioning dermatology services
Sharing of health data through the effective deployment of information systems should allow safer and more efficient health systems. However, to date many large IT system deployments in health care have had major shortcomings. This paper critically appraises the UK National Programme for IT and suggests where there are important lessons for other large-scale eHealth projects. Our method combined the classic evaluation methods of Donabedian with Pawson's realistic review to analyze the impact of the program at the health service, locality or major provider, and client-service impact levels. Financial incentives promoted uptake and use of IT systems at all levels. At the health service level, interventions that could be incorporated into clinical workflow were taken up. These included: a national unique identifier, creation of national registries and electronic transfer of data, records and results. At the regional and major provider level we identified how vendors offer very different electronic patient record (EPR) systems, which influence what is recorded and how health care is delivered. Using the EPR at the point of care takes longer, but this investment of time creates a more usable record and facilitates quality. National IT systems need to be clinically orientated, patient accessible, and underpinned by a secure, standardized back office system that enables messaging and information sharing between authenticated users. Learning the lessons from the UK and other large system deployments might enable other countries to leap to the forefront of health care computing. © 2012 European Federation for Medical Informatics. All rights reserved.
The provision and impact of online patient access to their electronic health records (EHR) and transactional services on the quality and safety of health care: Systematic review protocol
Background: Innovators have piloted improvements in communication, changed patterns of practice and patient empowerment arising from online access to electronic health records (EHR). International studies of online services, such as prescription ordering, online appointment booking and secure communications with primary care, show good uptake of email consultations, access to test results and appointment booking when the technologies and business processes are in place. Online access and transactional services are due to be rolled out across England by 2015; this review seeks to explore the impact of online access to health records and other online services on the quality and safety of primary health care. Objective: To assess the factors that may affect the provision of online patient access to their EHR and transactional services, and the impact of such access on the quality and safety of health care. Method: Two reviewers independently searched 11 international databases covering the period 1999-2012. A range of papers were included: descriptive studies using qualitative or quantitative methods, hypothesis-testing studies and systematic reviews. Detailed eligibility criteria will be used to shape study inclusion. A team of experts will review these papers for eligibility, extract data using a customised extraction form and use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) instrument to determine the quality of the evidence and the strength of any recommendations. Data will then be descriptively summarised and thematically synthesised. Where feasible, we will perform a quantitative meta-analysis. PROSPERO (International Prospective Register of Systematic Reviews) registration number: CRD42012003091. © 2012 PHCSG, British Computer Society.
Clinicians were oblivious to incorrect logging of test dates and the associated risks in an online pathology application: A case study
Background: UK primary care physicians receive their laboratory test results electronically. This study reports a computerised physician order entry (CPOE) system error in the pathology test request date that went unnoticed in family practices. Method: We conducted a case study using a causation-of-risk theoretical framework, comprising interviews with clinicians and the manufacturer to explore the identification of and reaction to the error. The primary outcome was the evolution and recognition of, and response to, the problem. The secondary outcome was to identify other issues with this system noted by users. Results: The problem was defined as the incorrect logging of test dates ordered through a CPOE system. The system assigned the test request date to the results, hence a blood test taken after a therapeutic intervention (e.g. an increase in cholesterol-lowering therapy) would appear in the computerised medical record as though it had been taken prior to the increase in treatment. This case demonstrates that the manufacturers failed to understand family physician workflow, regulation of medical software did not prevent the error, and inherent user trust in technology exacerbated the problem. It took three months before users in two practices independently noted the date errors. Conclusion: This case illustrates how users take software on trust and how suppliers fail to make provision for the risks associated with new software. The resulting errors led to inappropriate prescribing, follow-up, costs and risk. The evaluation of such devices should include risk management processes (RMP) to minimise and manage potential risk. © 2012 PHCSG, British Computer Society.
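To make the mechanism concrete, the following toy example (invented dates, not material from the case) shows how stamping a result with the request date rather than the specimen date makes a post-intervention test appear to precede the change in treatment:

```python
# Illustration of the date-logging error described above, using invented dates.
# The CPOE system stamped results with the *request* date, so a sample actually
# taken after a treatment change appears to pre-date that change.
from datetime import date

test_requested = date(2011, 1, 5)       # clinician orders a repeat cholesterol test
statin_dose_increased = date(2011, 1, 20)
sample_taken = date(2011, 2, 14)        # patient attends for phlebotomy later

recorded_date = test_requested          # faulty behaviour: request date attached to result
correct_date = sample_taken             # expected behaviour: specimen date attached

print(recorded_date < statin_dose_increased)  # True  -> result looks pre-treatment
print(correct_date < statin_dose_increased)   # False -> it was actually post-treatment
```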
Call for consistent coding in diabetes mellitus using the Royal College of General Practitioners and NHS pragmatic classification of diabetes
Background: The prevalence of diabetes is increasing with growing levels of obesity and an ageing population. New practical guidelines for diabetes provide an applicable classification. Inconsistent coding of diabetes hampers the use of computerised disease registers for quality improvement and limits the monitoring of disease trends. Objective: To develop a consensus set of codes that should be used when recording diabetes diagnostic data. Methods: The consensus approach was hierarchical, with a preference for diagnostic/disorder codes, to define each type of diabetes and non-diabetic hyperglycaemia; these were listed as being completely, partially or not readily mapped to available codes. The practical classification divides diabetes into type 1 (T1DM), type 2 (T2DM), genetic, other, unclassified and non-diabetic fasting hyperglycaemia. We mapped the classification to Read version 2, Clinical Terms version 3 and SNOMED CT. Results: T1DM and T2DM were completely mapped to appropriate codes. However, in other areas only partial mapping is possible. Genetics is a fast-moving field and there were considerable gaps in the available labels for genetic conditions; what the classification calls 'other' the coding systems label 'secondary' diabetes. The biggest gap was the lack of a code for diabetes where the type of diabetes is uncertain. Notwithstanding these limitations we were able to develop a consensus list. Conclusions: It is a challenge to develop codes that readily map to contemporary clinical concepts. However, clinicians should adopt the standard recommended codes and audit the quality of their existing records.
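The consensus code lists themselves are not given in the abstract; purely as an illustration of the structure described (categories of the pragmatic classification, each flagged as completely, partially or not readily mapped), a sketch with placeholder codes might look like this:

```python
# Sketch of one way a consensus mapping could be represented: an entry per
# category of the pragmatic classification with a mapping-completeness flag.
# All code values are placeholders, not the actual Read v2 or SNOMED CT codes,
# and the flags are illustrative only.
DIABETES_CODE_MAP = {
    "type_1_T1DM": {"read_v2": "<T1DM code>", "snomed_ct": "<T1DM concept>", "mapping": "complete"},
    "type_2_T2DM": {"read_v2": "<T2DM code>", "snomed_ct": "<T2DM concept>", "mapping": "complete"},
    "genetic": {"read_v2": None, "snomed_ct": None, "mapping": "partial"},
    "other_secondary": {"read_v2": None, "snomed_ct": None, "mapping": "partial"},
    "unclassified": {"read_v2": None, "snomed_ct": None, "mapping": "not_mapped"},
    "non_diabetic_hyperglycaemia": {"read_v2": None, "snomed_ct": None, "mapping": "partial"},
}

def categories_needing_new_codes(code_map: dict) -> list:
    """Categories that the current terminologies cannot yet express fully."""
    return [name for name, entry in code_map.items() if entry["mapping"] != "complete"]

print(categories_needing_new_codes(DIABETES_CODE_MAP))
```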
Accelerating the development of an information ecosystem in health care, by stimulating the growth of safe intermediate processing of health information (IPHI)
Health care, in common with many other industries, is generating large amounts of routine data, data that are challenging to process, analyse or curate: so-called 'big data'. A challenge for health informatics is to make sense of these data. Part of the answer will come from the development of ontologies that support the use of heterogeneous data sources, and from the development of intermediate processors of health information (IPHI). IPHI will sit between the generators of health data and information, often the providers of health care, and the managers, commissioners, policy makers, researchers, and the pharmaceutical and other healthcare industries. They will create a health ecosystem by processing data in a way that stimulates improved data quality, and potentially improved healthcare delivery, by providers of health care, and by providing greater insights to legitimate users of data. Exemplars are provided of how a health ecosystem might be encouraged and developed to promote patient safety and more efficient health care, in the areas of integrating data around the unsafe use of alcohol and exploring vaccine safety. A challenge for IPHI is how to ensure that their processing of data is valid, safe and maintains privacy. Development of the healthcare ecosystem and IPHI should be actively encouraged internationally. Governments, regulators and providers of health care should facilitate access to health data and the use of national and international comparisons to monitor standards. Most importantly, however, they should pilot new methods of improving quality and safety through the intermediate processing of health data.
Trends and transient change in end-digit preference in blood pressure recording: Studies of sequential and longitudinal collected primary care data
Background: End-digit preference (EDP) is a known cause of inaccurate blood pressure (BP) recording. Distortion has been reported around pay-for-performance (P4P) indicators. Methods: We studied sequential datasets (n = 148,000 to n = 900,000) and performed a longitudinal analysis of CONDUIT data (n = 250,000) over a 10-year period. We examined general trends in EDP and investigated the impact of diabetes and chronic kidney disease (CKD) P4P targets. Results: EDP reduced over time in both datasets; the percentage of patients with a zero end-digit declined from 70% to 27% for systolic BP (SBP) and from 68% to 26% for diastolic BP (DBP). There is more zero EDP at the extremes of BP, but in people with chronic disease the use of a zero end-digit was mainly seen at higher BP levels. P4P targets are associated with an increased preference for the even end-digit just below target: in diabetes the odds ratio (OR) was 1.47 (p = 0.003) for SBP and 1.19 (p = 0.09) for DBP; in CKD the OR was 1.65 (p < 0.001) for SBP and 1.48 (p = 0.0001) for DBP. Trends observed in the pilot data were validated in the longitudinal set. Conclusions: The decline in EDP is levelling off and P4P targets are associated with sub-target EDP. Primary care should automate BP measurement and recording. © 2011 Blackwell Publishing Ltd.
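End-digit preference is straightforward to quantify from recorded readings; the following minimal sketch, under assumed column names and with the P4P target supplied as a parameter, illustrates the kinds of measures reported above:

```python
# Minimal sketch of quantifying end-digit preference (EDP) in recorded systolic
# blood pressure. The column name "sbp" is hypothetical and the target is a
# parameter; neither the CONDUIT data nor the actual targets are used here.
import pandas as pd

def end_digit_distribution(readings: pd.Series) -> pd.Series:
    """Share of each terminal digit; uniform recording would give roughly 10% each."""
    return (readings.astype(int) % 10).value_counts(normalize=True).sort_index()

def zero_end_digit_rate(readings: pd.Series) -> float:
    """Proportion of readings ending in zero - the classic marker of EDP."""
    return float((readings.astype(int) % 10 == 0).mean())

def just_below_target_rate(readings: pd.Series, target: int) -> float:
    """Proportion recorded at the even value immediately below a P4P target."""
    just_below = target - 1 if (target - 1) % 2 == 0 else target - 2
    return float((readings.astype(int) == just_below).mean())

# df = pd.read_csv("bp_recordings.csv")
# print(end_digit_distribution(df["sbp"]))
# print(zero_end_digit_rate(df["sbp"]), just_below_target_rate(df["sbp"], target=150))
```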
What are the barriers to conducting international research using routinely collected primary care data?
Background: Primary care is computerized, with routine data recorded at the point of care. Secondary uses of these data include genetic studies, epidemiology and clinical trials. However, there are relatively few international studies. Objective: To identify the concepts that might predict readiness to collaborate in international research using routinely collected primary care data. Method: Literature review and data-gathering exercise from International Primary Care Informatics Working Group workshops, and an email-modified Delphi exercise. Results: To establish whether primary care data are fit for use in a collaborative study, information is needed at the micro-, meso- and macro-levels. At the micro- or data level, documented standards for interoperability and computerized records are needed to facilitate the linkage of data. At the meso-level, we need to understand the nature of the electronic patient record (EPR) and the specific study requirements. At the macro-level, the health system and the social and cultural context constrain what data are available. The framework defines the information needed at the point of expressing interest in, and of joining, a study. The initial assessment of readiness should be by self-assessment, followed by an in-depth appraisal immediately prior to the start of the study. Finally, a sensitivity analysis should be conducted to test the robustness of the data model. Conclusions: The literature focuses on technical issues (interoperability, the EPR and modeling), the workshops on socio-cultural and organizational issues. This framework will form the basis for developing a survey instrument for the initial assessment of readiness for collaboration in international research. © 2011 European Federation for Medical Informatics. All rights reserved.
Reporting observational studies of the use of information technology in the clinical consultation. A position statement from the IMIA Primary Health Care Informatics Working Group (IMIA PCI WG).
To develop a classification system to improve the reporting of observational studies of the use of information technology (IT) in clinical consultations. Literature review, workshops, and development of a position statement. We grouped the important aspects for consistent reporting into a "faceted classification", allowing the components relevant to a particular study to be used independently. The eight facets of our classification are: (1) Theoretical and methodological approach: e.g. dramaturgical, cognitive; (2) Data collection: type and method of observation; (3) Room layout and environment: how this affects interaction between clinician, patient and computer; (4) Initiation and interaction: who starts the consultation, and how the participants interact; (5) Information and knowledge utilisation: what sources of information or decision support are used or provided; (6) Timing and type of consultation variables: standard descriptors that allow comparison of the duration and description of continuous activities (e.g. speech, eye contact) and episodic ones, such as prescribing; (7) Post-consultation impact measures: satisfaction surveys and health economic assessment based on the perceived quality of the clinician-patient interaction; and (8) Data capture, storage and export formats: how to archive and curate data to facilitate further analysis. Adoption of this classification should make it easier to interpret research findings and facilitate the synthesis of evidence across studies. Those engaged in IT-consultation research should consider adopting this reporting guide.
Key concepts to assess the readiness of data for international research: data quality, lineage and provenance, extraction and processing errors, traceability, and curation. Contribution of the IMIA Primary Health Care Informatics Working Group.
To define the key concepts which inform whether a system for collecting, aggregating and processing routine clinical data for research is fit for purpose. Literature review and shared experiential learning from research using routinely collected data. We excluded socio-cultural issues, and privacy and security issues, as our focus was on exploring the linkage of clinical data. Six key concepts describe data: (1) Data quality: the core, overarching concept - are these data fit for purpose? (2) Data provenance: how the data came to be, incorporating the concepts of lineage and pedigree; mapping this process requires metadata, and new variables derived during data analysis have their own provenance. (3) Data extraction errors and (4) Data processing errors: these are the responsibility of the investigator extracting the data but need quantifying. (5) Traceability: the capability to identify the origins of any data cell within the final analysis table, essential for good governance and almost impossible without a formal system of metadata. (6) Curation: storing data and look-up tables in a way that allows future researchers to carry out further research or review earlier findings. There are common, distinct steps in processing data, and the quality of the metadata may be predictive of the quality of the process. Outputs based on routine data should include a review of the process from data origin to curation and should publish information about their data provenance and processing method.
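As a rough illustration of provenance and traceability (not a scheme taken from the paper), each derived variable could carry metadata recording how it came to be, so that any cell in the final analysis table can be traced back to its source extract:

```python
# Sketch of provenance metadata carried by a derived variable, so that any cell
# in the final analysis table can be traced back to its source extraction. The
# field names and the eGFR example are illustrative assumptions, not taken from
# the paper.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Provenance:
    source_dataset: str        # lineage: where the underlying data came from
    extraction_query: str      # how the data were extracted
    derived_from: list         # input variables, each with their own provenance
    derivation_rule: str       # how the new variable came to be
    processed_at: datetime = field(default_factory=datetime.utcnow)

egfr = Provenance(
    source_dataset="practice_extract_2011_q4",
    extraction_query="all serum creatinine results with recording dates",
    derived_from=["creatinine_umol_l", "age", "sex"],
    derivation_rule="MDRD eGFR formula applied to the latest creatinine per patient",
)

# Curation: store this metadata alongside the analysis table and look-up tables
# so that later researchers can review or reproduce each processing step.
print(egfr)
```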