Coding errors in an analysis of the impact of pay-for-performance on the care for long-term cardiovascular disease: a case study

Simon de Lusignan

Clinical Informatics and Health Outcomes Research Group, Department of Health Care Management and Policy, University of Surrey, Guildford GU2 7XH, UK

Benjamin Sun

Clinical Informatics and Health Outcomes Research Group, Department of Health Care Management and Policy, University of Surrey, Guildford GU2 7XH, UK

Christopher Pearce

Melbourne East General Practice Network, Suite 13, Level 1, 317–321, Whitehorse Road, Nunawading VIC 3131, Australia

Christopher Farmer

Department of Renal Medicine, East Kent Hospitals University NHS Foundation Trust, Canterbury CT1 3NG, UK

Paul Stevens

Department of Renal Medicine, East Kent Hospitals University NHS Foundation Trust, Canterbury CT1 3NG, UK

Simon Jones

Clinical Informatics and Health Outcomes Research Group, Department of Health Care Management and Policy, University of Surrey, Guildford GU2 7XH, UK

Cite this article: de Lusignan S, Sun B, Pearce C, Farmer C, Stevens P, Jones S. Coding errors in an analysis of the impact of pay-for-performance on the care for long-term cardiovascular disease: a case study. Inform Prim Care. 2014;21(2):92–101.

Copyright © 2014 The Author(s). Published by BCS, The Chartered Institute for IT under Creative Commons license http://creativecommons.org/licenses/by/4.0/

Author address for correspondence:

Simon de Lusignan
Clinical Informatics and Health Outcomes Research Group, Department of Health Care Management and Policy, University of Surrey, Guildford GU2 7XH, UK
Email: s.lusignan@surrey.ac.uk


ABSTRACT

Objective There is no standard method of publishing the code ranges in research using routine data. We report how code selection affects the reported prevalence and precision of results.

Design We compared the code ranges used to report the impact of pay-for-performance (P4P) with those specified in the P4P scheme and those used by our informatics team to identify cases. We estimated the positive predictive value (PPV) with which people with these chronic conditions were included in the study population, and compared the prevalence and blood pressure (BP) of people with hypertension (HT).

Setting Routinely collected primary care data from the quality improvement in chronic kidney disease (QICKD—ISRCTN56023731) trial.

Main outcome measures The case study population represented roughly 85% of those in the HT P4P group (PPV = 0.842; 95%CI = 0.840–0.844; p < 0.001). We also found differences in the prevalence of stroke (PPV = 0.694; 95%CI = 0.687–0.700) and coronary heart disease (PPV = 0.166; 95%CI = 0.162–0.170), where the paper restricted itself to myocardial infarction codes.

Results We found that the long-term cardiovascular conditions, and the codes selected for these conditions, were inconsistent with those in P4P or the QICKD trial. The prevalence of HT based on the case study codes was 10.3%, compared with 11.8% using the P4P codes; the mean BP was 138.3 mmHg (standard deviation (SD) 15.84 mmHg)/79.4 mmHg (SD 10.3 mmHg) and 137.3 mmHg (SD 15.31 mmHg)/79.1 mmHg (SD 9.93 mmHg) for the case study and P4P populations, respectively (p < 0.001).

Conclusion The case study lacked precision, and excluded cases had a lower BP. Publishing code ranges made this comparison possible and should be mandated for publications based on routine data.

Keywords: clinical coding, computerised, heart diseases, hypertension, incentive, medical records system, reimbursement, research design


BACKGROUND AND SIGNIFICANCE

‘Comparing apples with oranges’

Standard methods of reporting research are well established and widely accepted, though there is currently no guidance on publishing code ranges for papers based on routine data. The Consolidated Standards of Reporting Trials (CONSORT) statement set out clear guidance and led to the development of checklists for the reporting of trials.1 This statement has not only influenced the reporting of trials but has also led to the development of more systematic schemas for planning and reporting other kinds of studies, brought together in the EQUATOR Network.2 However, neither EQUATOR nor CONSORT says much about computerised data quality. STARE-HI contains guidelines pertinent to health informatics evaluation, but is similarly silent on code ranges.3

Data in computerised medical records (CMRs) take two forms: coded data, where an important diagnosis or symptom is represented by a machine-interpretable code, and free text. Codes are used for much more than diagnoses; for example, they also record examination findings, test results, procedures, medications and allergies. Nearly all CMRs code important information so that data can be searched and linked. Free text is difficult to process because clinical notes often contain qualifiers before words (e.g. 'unlikely bowel cancer' and 'fear of cancer') and have multiple near synonyms (e.g. coronary heart disease (CHD) and ischaemic heart disease).

Internationally, a variety of coding systems are available, such as the Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT) and the International Classification of Diseases (ICD), currently at version 10 (ICD-10). The most widely used terminology in UK primary care is the Read terminology version 2 (Read 2); however, a more complex terminology, Read Clinical Terms version 3 (CTV3), is also used by some CMR vendors.4
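Read 2 codes form a prefix hierarchy, so a code range can be selected by matching a chapter prefix, and a range that omits a prefix such as G2 silently drops every code beneath it. The following is a minimal sketch using a handful of real Read 2 rubrics; the selection logic is our illustration, not any study's actual query.

    # A minimal sketch of prefix-based code range selection in Read 2.
    READ2 = {
        "G2": "Hypertensive disease",
        "G20": "Essential hypertension",
        "G2y": "Other specified hypertensive disease",
        "G2z": "Hypertensive disease NOS",
        "G30": "Acute myocardial infarction",  # outside the G2 chapter
    }

    def in_range(code: str, prefix: str) -> bool:
        # A code falls within a Read 2 range if it starts with the prefix.
        return code.startswith(prefix)

    ht = {code: rubric for code, rubric in READ2.items() if in_range(code, "G2")}
    print(ht)  # every G2* rubric is captured; G30 (MI) is not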


OBJECTIVE

A recent review concluded that there was insufficient evidence that pay-for-performance (P4P) had improved the quality of care in hypertension (HT),5 and we have published a critique of its findings.6 However, on reflection, we feel there is a major opportunity to improve the standards of reporting for research studies based on routinely collected data. Fortunately, the authors of that paper published the code ranges they used to define the long-term conditions; we treat their paper as our case study. We carried out this analysis to explore whether the code ranges selected in the case study would correctly identify the people with the listed long-term cardiovascular conditions, and whether there was any difference in blood pressure (BP) between the different groups.


METHOD

We compared the code ranges used in the case study with those specified in the P4P programme and those used by our clinical informatics group to identify people with these conditions from routinely collected data.

First, we identified the reference terminology used in the case study and any justification for the code ranges used within it. We next explored the code ranges used in the case study paper and compared them with the range of codes available within the coding system, using the NHS code browser.7 We manually searched for terms related to the conditions used in the case study within Read 2, identifying all the relevant terms and their associated codes. We compared the disease, symptom, procedure and drug codes that we found with the codes used in the case study.

We identified the long-term cardiovascular conditions in the P4P scheme that correspond to the conditions described in the case study. We extracted the inclusion and exclusion code ranges for each condition category from the P4P business rules, which are available online.8 We also compared the codes used in the case study with those used by our clinical informatics team to detect these same long-term conditions from routinely collected data. The routine data were a convenience sample: those extracted for the quality improvement in chronic kidney disease (QICKD) trial.9,10

We examined which codes were included and excluded in the case study, P4P and the QICKD trial, and created a Venn diagram of the codes. We then compared the prevalence of disease identified using the case study code ranges with the prevalence calculated from routine (QICKD trial) data, the prevalence using P4P code ranges in the QICKD data, and the nationally stated P4P prevalence, which is publicly available.11 For chronic kidney disease (CKD), the QICKD study team identified people with CKD using renal function test measures, extracting value ranges associated with three procedure codes (451E—glomerular filtration rate calculated by the abbreviated Modification of Diet in Renal Disease Study Group calculation, 44J3—serum creatinine and 44JF—plasma creatinine level).12
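The regions of such a Venn diagram reduce to simple set operations. A minimal sketch follows; the three-code lists are illustrative stand-ins, since the real ranges run to many codes, and are not the actual case study, P4P or QICKD ranges.

    # A minimal sketch of deriving Venn diagram regions from three code sets.
    case_study = {"G20", "G24", "662"}          # illustrative only
    p4p = {"G2", "G20", "G24", "G2z"}           # illustrative only
    qickd = {"G2", "G20", "G24", "G2z", "G651"} # illustrative only

    print("in all three:", case_study & p4p & qickd)
    print("case study only:", case_study - (p4p | qickd))
    print("in P4P or QICKD but missed by the case study:",
          (p4p | qickd) - case_study)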

We explored a single P4P year, 2009–2010, to see whether BPs differed between the case study, P4P indicator and QICKD trial populations with HT. We report the mean and standard deviation (SD), and use an independent samples t-test to compare the mean systolic and diastolic BP of the people identified by the case study codes with the BP of the people in two further groups: people with HT identified by P4P but not included in the case study, and people identified as having HT in the QICKD trial but not in the case study. We also report the overall population BP for the case study codes compared with the more comprehensive QICKD trial codes.
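A minimal sketch of this comparison follows; it simulates BP readings from the summary statistics reported in the Results, whereas the trial analysis ran on the actual patient-level records in SPSS.

    # A minimal sketch of the independent samples t-test on systolic BP.
    # Arrays are simulated from reported means/SDs, not the real data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    bp_case_study = rng.normal(138.3, 15.84, 87989)
    bp_p4p_only = rng.normal(137.3, 15.31, 17165)

    t, p = stats.ttest_ind(bp_case_study, bp_p4p_only)
    print(f"mean difference = "
          f"{bp_case_study.mean() - bp_p4p_only.mean():.2f} mmHg, p = {p:.3g}")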

We analysed the data in SPSS (PASW—IBM Statistics) version 18. We calculated the positive predictive value of a person being identified as having the disease based on routine data, and repeated this process for the P4P indicator group compared with the study population. We quote 95% confidence intervals (95%CI) for the predictive values, and the p-values from chi-squared tests on the two-by-two contingency tables.
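As a minimal sketch (with illustrative counts, not the study's figures), the PPV, its normal-approximation 95%CI and the chi-squared test reduce to the following:

    # A minimal sketch of the PPV, its normal-approximation 95%CI and the
    # chi-squared test on a 2x2 contingency table. Counts are illustrative.
    import math
    from scipy.stats import chi2_contingency

    tp = 87989   # in the case study group and on the P4P register
    fp = 16500   # in the case study group but not on the register
    ppv = tp / (tp + fp)
    se = math.sqrt(ppv * (1 - ppv) / (tp + fp))
    print(f"PPV = {ppv:.3f} "
          f"(95%CI {ppv - 1.96*se:.3f} to {ppv + 1.96*se:.3f})")

    # 2x2 table: case study membership (rows) vs. register membership (columns)
    table = [[tp, fp], [18000, 750000]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-squared p = {p:.3g}")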

The QICKD trial is ethically approved (Clinical Trials Registration: ISRCTN56023731). P4P and the other data used in this paper are publicly available.


RESULTS

The case study used the Read 2 coding classification as its terminology, rather than CTV3 as stated, and used a limited subset of the available terms. Consequently, we report our findings as comparisons with the Read 2 code system. The code ranges included in the study were limited and did not cover all the codes for each of the conditions specified (Table 1). Apart from CKD, none of the conditions in the study included any history/symptom codes. The case study included most, but not all, of the major codes for the conditions studied.

Although we were able to find the P4P conditions that matched the ones described in the case study [HT and heart failure (HF)], we could not find matching categories for myocardial infarction (MI) and renal failure (RF) (Table 2). The nearest corresponding conditions in P4P were CHD and CKD. The case study referred only to stroke, whereas P4P includes both stroke and transient ischaemic attack (TIA).

Although the case study examined the effect of P4P, the code selection it used was inconsistent with the code ranges listed in the P4P business rules. The HT codes selected did not include G2 (hypertensive disease) (Figure 1). The HT codes used in the study also included cardiac disease monitoring codes (662), which P4P classifies under HF. Most codes within the 662 hierarchy are not consistent with HT.

The case study describes using 'RF' rather than CKD codes. It is CKD, not RF, that is included in P4P, and the codes the authors included are CKD codes (1Z1), not RF codes. However, the code selection in the case study for 'RF' was, in our opinion, too broad. The 1Z1 hierarchy includes all CKD codes, including stage 1 and 2 CKD (normal and mildly impaired renal function, respectively);13 neither of the latter is included in P4P or in the QICKD trial data used in the analysis. Also for RF, the authors included hypertensive renal disease codes (G22), and for HF, they included hypertensive heart and renal disease (G23); neither is included in P4P, nor considered appropriate for the QICKD trial queries. The codes used for RF in the case study include chronic and unspecified RF (K05 and K06) but not acute RF (K04).
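A minimal sketch of the narrower selection follows; the 1Z1x codes below are, to the best of our knowledge, the Read 2 CKD staging codes, and the filtering logic is our illustration rather than the P4P business rules verbatim.

    # A minimal sketch of restricting the 1Z1 CKD hierarchy to stages 3-5,
    # as in the P4P business rules, rather than taking the whole hierarchy.
    ckd_1z1 = {
        "1Z10": "Chronic kidney disease stage 1",
        "1Z11": "Chronic kidney disease stage 2",
        "1Z12": "Chronic kidney disease stage 3",
        "1Z13": "Chronic kidney disease stage 4",
        "1Z14": "Chronic kidney disease stage 5",
    }
    stages_3_to_5 = {code for code, rubric in ckd_1z1.items()
                     if rubric.split()[-1] in {"3", "4", "5"}}
    print(sorted(stages_3_to_5))  # stages 1 and 2 are excluded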

The code ranges used in the case study are generally a subset of those used by the clinical informatics team in the QICKD trial. Some codes were inappropriately included in the case study but appear in neither QICKD nor P4P: 662 for HT monitoring, G23 for HF and G22 for CKD. The case study and P4P did include some codes not included in the QICKD trial.

The case study used some codes whose inclusion, we anticipate, would overestimate the prevalence of cerebrovascular disease. These were vertebral artery syndrome (G651), subclavian steal syndrome (G652) and vertebrobasilar insufficiency (G656). The QICKD study does not include them because, in routine practice, we see them used to record clinical diagnoses that might not represent true cerebral ischaemia; vertebrobasilar insufficiency, for example, is commonly diagnosed clinically in older people with unsteadiness or symptoms on looking upwards.

With the exception of CKD/RF, the prevalence rates reported in the case study are lower than the QICKD and P4P prevalence rates predicted from routine data. The biggest discrepancy is for the MI and CHD mismatch, where the case study had a prevalence of 0.48% compared with 2.92% for QICKD, 2.31% for P4P and 3.44% for the nationally stated P4P. This is mainly due to the case study selecting only codes for MI rather than for CHD. HF showed the closest agreement, with the case study, P4P and QICKD differing by at most 0.02%.

We estimate that, in HT, the cases included in the case study are about 85% of those in the P4P population and 83% of those identified as having HT in the QICKD trial, owing to the difference in the selection of codes. The biggest difference was the omission of codes G2, G2z and G2y for 'hypertensive disease', 'hypertensive disease not otherwise specified (NOS)' and 'other specified hypertensive disease', respectively; these codes represent around 12% of coding, and it is likely that most are people with essential HT. The large mismatch between MI and CHD means that the case study identified only around 21% of people with CHD based on the P4P register codes, and 17% of those identified with the broader range of QICKD trial codes. The matches for stroke and HF were much closer. In CKD, the case study used the whole of the 1Z1 hierarchy, so it would have found all the coded cases of CKD; however, this vastly exceeded the national P4P prevalence because it also included the stage 1 and 2 CKD codes. The QICKD trial found that disease codes were unreliable, so it used laboratory recordings of renal function, thereby identifying more cases and reporting a higher prevalence than the P4P indicator.11 This is a special case because CKD is a relatively newly recognised condition (Table 3).14

Table 1 Case study codes compared with selection found in the NHS browser

Table 2 Conditions and codes used in the case study compared with those in P4P

Figure 1 Venn diagram of codes for HT

With the exception of CKD, the negative predictive values were close to unity. This is because very few people included in the case study disease groups did not, as far as we could tell, have the disease (data not shown).

Table 3 Prevalence and PPV of the five conditions using different measures of code selection*

Our snapshot of HT demonstrated that people in the two excluded code groups had lower systolic and diastolic BP. The mean systolic and diastolic BPs for the codes included in the case study were 138.3 mmHg (n = 87,989, SD 15.84 mmHg) and 79.4 mmHg (SD 10.3 mmHg). The means for the people with non-included codes were 137.3 mmHg (n = 17,165, SD 15.31 mmHg) and 79.1 mmHg (SD 9.93 mmHg) for the group with other P4P codes, and 136.4 mmHg (n = 18,898, SD 15.68 mmHg) and 79.1 mmHg (SD 9.95 mmHg) for people with other QICKD trial codes. These differences are all small but statistically significant (independent samples t-test, p < 0.001). Compared with the case study values, the mean systolic BP for the non-included P4P codes was 0.9 mmHg lower (95%CI 0.65–1.17 mmHg) and the mean diastolic 0.4 mmHg lower (95%CI 0.19–0.52 mmHg); for the non-included QICKD trial codes, the mean systolic was 1.82 mmHg lower (95%CI 1.57–2.06 mmHg) and the mean diastolic 0.3 mmHg lower (95%CI 0.17–0.48 mmHg).

Had the mean BP been reported for everyone with HT using the more comprehensive set of codes from the QICKD study, it would have been 137.9 mmHg (SD 15.83 mmHg) instead of 138.2 mmHg (SD 15.84 mmHg).


DISCUSSION

Principal findings

The publication of code ranges in the case study allowed us to compare the population the authors intended to study with the population actually studied; it is likely that they omitted around 15% of the cases. Where they looked at other comorbidities, they used descriptions or definitions that differ, in three out of four diseases, from their equivalents in the P4P scheme. It is likely that they drew their conclusions from a subset of the people included in the P4P scheme, and that these in turn may be a subset of the people who actually have the condition. Our snapshot of HT suggests that, at the population level, the mean BP would have been slightly but statistically significantly different had all the relevant codes been included.

Implications of the findings

Implementing a standardised method of publishing, and justifying, the code ranges used when routinely collected data are used for research would reduce errors in results that are due to inconsistencies in code selection. Whilst we do not know whether including all the relevant codes would have affected the outcome of the case study, our analysis demonstrates that there is a difference between the groups and a completely avoidable loss of precision in the results. Including an informatician in the study team may help to reduce the potential for errors in code selection.

Table 4 Exemplar layout of codes listed in a study

Comparison with the literature

The literature on comparative effectiveness highlights the potential of using routine data for research,15,16 but also stresses the importance of recognising its limitations and addressing coding and data quality issues.17 Differences in recording are reported between data in the CMR and data in the billing system,18 and it is recognised that a comprehensive set of data is needed to overcome these challenges.19

If data are used without critical appraisal, or without taking into account the human element of clinical coding, there is a risk of error: coding is part of a complex social interaction between a clinician and a patient.20 These data are often incomplete21 and subject to the vagaries of the computer interface.22

Data from general practice are collected first and foremost to support an individual patient's care; other uses form a hierarchy that needs to be considered when interpreting the data.23 The P4P code ranges belong to a payment scheme that focuses on the delivery of quality care.24

Many codes do not accurately represent whether a disease or condition is present in an individual patient.25,26 We have previously demonstrated this in some detail for diabetes, where people were not coded properly and were therefore not included in disease registers.27,28 People excluded from a register may not receive the associated prompts and recalls, and may therefore receive worse care.29

Practices joining data-providing networks such as The Health Improvement Network (the database used in the case study) undergo quality screening,30,31 and this may have introduced selection bias, with such practices having less scope to improve as a result of P4P. Token incentives used in a similar network did improve data quality.32

Finally, the quality of the extraction system can affect data quality.

Limitations

This analysis examined only one study and a limited number of conditions. There is no trial or prospective evidence that standardising code ranges would affect the outcome or conclusions of the case study or of other studies. Our paper does not aim to question the conclusion of the case study.

It is possible that the QICKD study database is not representative of the national population and that the frequency of codes found in this database is not representative of national practice; however, as the principal difference in HT prevalence arose from the failure to use the G2 codes specified in the P4P business rules, this is unlikely.

Call for further research

New methods are needed for describing the code ranges used in studies; in the interim, we suggest adopting the clinical informatics method. The tables and data dictionary developed by the clinical informatics team may provide an initial model whilst more detailed consensus models are developed. We suggest that all studies of this nature declare and provide a reference terminology, ideally one that can be browsed by others. We also recommend that authors use this as a controlled vocabulary, labelling variables precisely using the displayed rubric (e.g. G20 'essential HT'). Authors should then list their included codes at three levels of granularity (domain, sub-domain and term), and list any excluded codes using the same three levels (Table 4). We further recommend developing a table of code mappings, in which codes from one hierarchy or coding system are converted to another, as we have demonstrated with ethnicity.33 Especially in complex studies, browsable lists of the variables included should be made available online. We provide links to a downloadable example from the QICKD trial34 and a smaller osteoporosis study (Figure 2).35

We suggest a tabular format for representing codes, and an online browsable format for the data dictionaries of study variables; both are part of our standard way of processing data for studies based on routine data (Table 4).
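A minimal sketch of how such a three-level listing might be serialised for publication follows; the rows are illustrative, and the real tables and dictionary are linked above.

    # A minimal sketch serialising an included-codes table at three levels
    # of granularity (domain / sub-domain / term). Rows are illustrative.
    import csv
    import sys

    rows = [
        ("Hypertension", "Hypertensive disease", "G2", "Hypertensive disease"),
        ("Hypertension", "Hypertensive disease", "G20", "Essential hypertension"),
        ("Hypertension", "Hypertensive disease", "G2z", "Hypertensive disease NOS"),
    ]
    writer = csv.writer(sys.stdout)
    writer.writerow(["domain", "sub_domain", "code", "term"])
    writer.writerows(rows)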

In the longer term, we need to develop ontological methods to define data sets, rather than having experts produce coding lists that are simply added to or edited by other experts.36,37 An ontology, in informatics, is a set of concepts and their relationships; most conditions can be defined by diagnostic criteria and test results, or inferred from the use of therapy, and refuted if the demographics or other data do not fit the condition.
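As a minimal sketch of what such a definition might look like in executable form (the thresholds and fields are illustrative assumptions, not a validated case definition):

    # A minimal sketch of an ontology-style case definition: hypertension
    # is asserted from diagnostic codes, inferred from readings plus
    # therapy, and refuted when other data do not fit.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        codes: set
        mean_systolic_bp: float
        on_antihypertensives: bool
        age: int

    def has_hypertension(p: Patient) -> bool:
        diagnosed = any(code.startswith("G2") for code in p.codes)
        inferred = p.mean_systolic_bp >= 140 and p.on_antihypertensives
        refuted = p.age < 18  # e.g. paediatric HT needs different criteria
        return (diagnosed or inferred) and not refuted

    print(has_hypertension(Patient({"G20"}, 152.0, True, 63)))  # True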


CONCLUSIONS

Publishing the code ranges enabled us to critique the premise on which the findings of the case study are based. The failure to justify why a paper about P4P defined clinical conditions differently from the scheme itself resulted in a loss of precision in its findings. In the absence of other consensus guidance, authors could consider adopting the clinical informatics tables and data dictionary model. Authors of papers based on routine data should publish and justify the code ranges they select.

Figure 2 QICKD data dictionary


ACKNOWLEDGEMENTS

The authors would like to thank the clinical informatics team at the University of Surrey (www.clininf.eu). The QICKD trial was supported by the Health Foundation and the Edith Murphy Trust.


REFERENCES

1. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 1996;276(8):637–9. http://dx.doi.org/10.1001/jama.1996.03540080059030.

2. Mäkelä M, Kaila M and Stein K. Mind sharpeners for scientists: the EQUATOR Network. International Journal of Technology Assessment in Health Care 2011;27(2):99–100. http://dx.doi.org/10.1017/S0266462311000158. PMid:21429289.

3. Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykänen P and Rigby M. STARE-HI—statement on reporting of evaluation studies in health informatics. Yearbook of Medical Informatics 2009:23–31. PMid:19855867.

4. de Lusignan S. Codes, classifications, terminologies and nomenclatures: definition, development and application in practice. Informatics in Primary Care 2005;13(1):65–70. PMid:15949178.

5. Serumaga B, Ross-Degnan D, Avery AJ, Elliott RA, Majumdar SR, Zhang F, et al. Effect of pay for performance on the management and outcomes of hypertension in the United Kingdom: interrupted time series study. BMJ 2011;342:d108. http://dx.doi.org/10.1136/bmj.d108.

6. Stevens PE, Farmer CK and de Lusignan S. Effect of pay for performance on hypertension in the United Kingdom. American Journal of Kidney Diseases 2011;58(4):508–11. http://dx.doi.org/10.1053/j.ajkd.2011.06.010. PMid:21816527.

7. NHS Department of Health Informatics Directorate. Technology Reference data Update Distribution. NHS Read Browser version 11. http://www.uktcregistration.nss.cfh.nhs.uk/trud3/user/guest/group/0/pack/7/subpack/8/releases (accessed 26 March 2014).

8. NHS Primary Care Commissioning. QOF Implementation: Business Rules. http://www.pcc-cic.org.uk/ (accessed 26 March 2014).

9. de Lusignan S, Gallagher H, Chan T, Thomas N, van Vlymen J, Nation M, et al. The QICKD study protocol: a cluster randomised trial to compare quality improvement interventions to lower systolic BP in chronic kidney disease (CKD) in primary care. Implementation Science 2009;4:39. http://dx.doi.org/10.1186/1748-5908-4-39.

10. de Lusignan S, Gallagher H, Jones S, Chan T, van Vlymen J, Tahir A, et al. Using audit-based education to lower systolic blood pressure in chronic kidney disease (CKD): results of the quality improvement in CKD (QICKD) trial [ISRCTN: 56023731]. Kidney International 2013;84(3):609–20. http://dx.doi.org/10.1038/ki.2013.96. PMid:23536132; PMCid:PMC3778715.

11. NHS Information Centre for Health and Social Care. QOF 2009/10 Data Tables. http://www.hscic.gov.uk/searchcatalogue?productid=5204&topics=0%2fPrimary+care+services&kwd=Q&sort=Relevance&size=10&page=6#top (accessed 26 March 2014).

12. de Lusignan S, Tomson C, Harris K, van Vlymen J and Gallagher H. Creatinine fluctuation has a greater effect than the formula to estimate glomerular filtration rate on the prevalence of chronic kidney disease. Nephron Clinical Practice 2011;117(3):c213–24. http://dx.doi.org/10.1159/000320341. PMid:20805694.

13. Wilcox M. Illogical placing of codes within a clinical classification. Informatics in Primary Care 2009;17(2):131; discussion 132. PMid:19807955.

14. Gomez GB, de Lusignan S and Gallagher H. Chronic kidney disease: a new priority for primary care. British Journal of General Practice 2006;56(533):908–10. PMid:17132377; PMCid:PMC1934049.

15. Etheredge LM. Creating a high-performance system for comparative effectiveness research. Health Affairs (Millwood) 2010;29(10):1761–7. http://dx.doi.org/10.1377/hlthaff.2010.0608. PMid:20921473.

16. Sullivan P and Goldmann D. The promise of comparative effectiveness research. JAMA 2011;305(4):400–1. http://dx.doi.org/10.1001/jama.2011.12. PMid:21266687.

17. Hirsch BR, Giffin RB, Esmail LC, Tunis SR, Abernethy AP and Murphy SB. Informatics in action: lessons learned in comparative effectiveness research. Cancer Journal 2011;17(4): 235–8. http://dx.doi.org/10.1097/PPO.0b013e31822c3944. PMid:21799331.

18. Pace WD, Cifuentes M, Valuck RJ, Staton EW, Brandt EC and West DR. An electronic practice-based network for observational comparative effectiveness research. Annals of Internal Medicine 2009;151(5):338–40. http://dx.doi.org/10.7326/0003-4819-151-5-200909010-00140. PMid:19638402.

19. Devoe JE, Gold R, McIntire P, Puro J, Chauvie S and Gallia CA. Electronic health records vs Medicaid claims: completeness of diabetes preventive care data in community health centers. Annals of Family Medicine 2011;9(4):351–8. http://dx.doi.org/10.1370/afm.1279. PMid:21747107; PMCid:PMC3133583.

20. de Lusignan S, Wells SE, Hague NJ and Thiru K. Managers see the problems associated with coding clinical data as a technical issue whilst clinicians also see cultural barriers. Methods of Information in Medicine 2003;42(4):416–22. PMid:14534643.

21. de Lusignan S and van Weel C. The use of routinely collected computer data for research in primary care: opportunities and challenges. Family Practice 2006;23(2):253–63. http://dx.doi.org/10.1093/fampra/cmi106. PMid:16368704.

22. Tai TW, Anandarajah S, Dhoul N and de Lusignan S. Variation in clinical coding lists in UK general practice: a barrier to consistent data entry? Informatics in Primary Care 2007;15(3):143–50. PMid:18005561.

23. Pearce C, Gardner K, Shearer M and Kelly J. A divisions worth of data. Australian Family Physician 2011;40(3):167–70. PMid:21597524.

24. de Lusignan S and Mimnagh C. Breaking the first law of informatics: the Quality and Outcomes Framework (QOF) in the dock. Informatics in Primary Care 2006;14(3):153–6. PMid:17288700.

25. Iezzoni LI, Heeren T, Foley SM, Daley J, Hughes J and Coffman GA. Chronic conditions and risk of in-hospital death. Health Services Research 1994;29(4):435–60. PMid:7928371; PMCid:PMC1070016.

26. McCarthy EP, Iezzoni LI, Davis RB, Palmer RH, Cahalane M, Hamel MB, et al. Does clinical evidence support ICD-9-CM diagnosis coding of complications? Medical Care 2000;38(8):868–76. http://dx.doi.org/10.1097/00005650-200008000-00010. PMid:10929998.

27. de Lusignan S, Khunti K, Belsey J, Hattersley A, van Vlymen J, Gallagher H, et al. A method of identifying and correcting miscoding, misclassification and misdiagnosis in diabetes: a pilot and validation study of routinely collected data. Diabetic Medicine 2010;27(2):203–9. http://dx.doi.org/10.1111/j.1464-5491.2009.02917.x. PMid:20546265.

28. de Lusignan S, Sadek N, Mulnier H, Tahir A, Russell-Jones D and Khunti K. Miscoding, misclassification and misdiagnosis of diabetes in primary care. Diabetic Medicine 2012;29(2):181–9. http://dx.doi.org/10.1111/j.1464-5491.2011.03419.x.

29. Hassan Sadek N, Sadek AR, Tahir A, Khunti K, Desombre T and de Lusignan S. Evaluating tools to support a new practical classification of diabetes: excellent control may represent misdiagnosis and omission from disease registers is associated with worse control. International Journal of Clinical Practice 2012;66(9):874–82. http://dx.doi.org/10.1111/j.1742-1241.2012.02979.x. PMid:22784308; PMCid:PMC3465806.

30. Blak BT, Thompson M, Dattani H and Bourke A. Generalisability of The Health Improvement Network (THIN) database: demographics, chronic disease prevalence and mortality rates. Informatics in Primary Care 2011;19(4):251–5. PMid:22828580.

31. Bourke A, Dattani H and Robinson M. Feasibility study and methodology to create a quality-evaluated database of primary care data. Informatics in Primary Care 2004;12(3):171–7. PMid:15606990.

32. de Lusignan S, Stephens PN, Adal N and Majeed A. Does feedback improve the quality of computerized medical records in primary care? Journal of the American Medical Informatics Association 2002;9(4):395–401. http://dx.doi.org/10.1197/jamia.M1023. PMid:12087120; PMCid:PMC346626.

33. Kumarapeli P, Stepaniuk R, de Lusignan S, Williams R and Rowlands G. Ethnicity recording in general practice computer systems. Journal of Public Health (Oxford) 2006;28(3):283–7. http://dx.doi.org/10.1093/pubmed/fdl044. PMid:16840765.

34. Clinical Informatics Research Group. QICKD Dictionary. http://www.clininf.eu/qickd-data-dictionary.html (accessed 26 March 2014).

35. Clinical Informatics Research Group. Osteoporosis Dictionary. http://www.clininf.eu/osteoporosis-data-dictionary.html (accessed 26 March 2014).

36. de Lusignan S, Liaw ST, Michalakidis G and Jones S. Defining datasets and creating data dictionaries for quality improvement and research in chronic disease using routinely collected data: an ontology-driven approach. Informatics in Primary Care 2011;19(3):127–34. PMid:22688221.

37. Liaw ST, Rahimi A, Ray P, Taggart J, Dennis S, de Lusignan S, et al. Towards an ontology for data quality in integrated chronic disease management: a realist review of the literature. International Journal of Medical Informatics 2012. http://dx.doi.org/10.1016/j.ijmedinf.2012.10.001. PMid:23122633.
