Chairman: Ms Bushra Mushtaq

Organiser: Novartis Pharmaceuticals UK

Sponsorship: This supplement and the meeting on which it was based were sponsored by Novartis. All authors received honoraria, contributed to the development of the manuscript, and retained final control of the content and editorial decisions. Medical writing assistance was provided by Sue Lupton of Novartis. Novartis have checked that the content was factually accurate, balanced and compliant with the Association of the British Pharmaceutical Industry Code of Practice.

S01 (Clinical) audit basics

Mandeep Bindra

Stoke Mandeville Hospital, Buckinghamshire Hospitals NHS Trust, Eye Department, Mandeville Wing, Mandeville Road, Aylesbury, Buckinghamshire HP21 8AL, UK

Correspondence: Mandeep Bindra: mandeep.bindra@nhs.net

My audience comprised a wide variety of healthcare professionals involved in ophthalmology, including nurses, optometrists, orthoptists, imaging technicians, clinical specialists in training and consultants.

This short presentation concentrated on the basics of audit, aiming to establish and remind the attendees what audit is in practice. The audience was polled as to why they thought we do audit. The answers included:

  • because we have to as part of our training,

  • as a process of continual assessment to make sure we have an upwards spiral of excellence rather than a slide to mediocrity,

  • to make sure we are actually doing what we intended to do,

  • to make sure we are following best practice,

  • to compare our practice to that of others and identify reasons for variation in outcomes,

  • to identify areas where our performance is suboptimal.

I agree that these are all valid points, and I must stress that clinical audit is useful for identifying good practice as well as highlighting practice that is not so good; it should not be thought of merely as a way of illustrating deficiencies, but also as a way of showing that standards can be exceeded.

One of the earliest examples of epidemiological audit was provided by Florence Nightingale in the 1850s. When she went to the military hospitals at Scutari, she was horrified by the high levels of mortality and the filthy state of the wards. Unable to make her voice heard or make those in authority acknowledge the deficiencies, she set about meticulously collecting data on deaths in the hospital—i.e. she established the current standard. She then utilised her team of nurses to provide consistently good nursing care and improved sanitary conditions i.e. she implemented change. She then collected data on the mortality levels after these changes were made i.e. she re-measured her original criteria. She was able to demonstrate a fall in mortality rates from 40% to 2% and with these robust figures was able to drive similar changes through the wider army medical service.

The father of modern clinical audit could be considered to be Ernest Codman, who in 1910 investigated individual clinical outcomes following surgical intervention, seeking to relate good outcomes to surgical practice. This formed the basis of the clinical audit used in healthcare systems worldwide today.

Audit was first introduced systematically to the NHS in 1989 with the publication of the white paper ‘Working for patients’ [1]. This was developed in 1997 with a further white paper (The new NHS) [2], which set up a framework through which NHS organisations are accountable for continually improving the quality of their services and safeguarding high standards of care, by creating an environment in which excellence in clinical care can flourish.

Audit is a key component of clinical governance, which is itself a core pillar of good practice. Clinical audit is now seen as a fundamental requirement within the NHS for maintaining good standards of care.

The National Institute for Health and Care Excellence (UK) NICE principles for best practice in clinical audit were originally issued in 2002 [3]. Within this document NICE provides a statement that, I feel, clearly defines the role of audit:

‘Audit is a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structure, processes, and outcomes of care are selected and systematically evaluated against explicit criteria. Where indicated, changes are implemented at an individual, team, or service level and further monitoring is used to confirm improvement in healthcare delivery.’

Whilst it is important to have a clear understanding of what a clinical audit is, I think it is also important to understand what it is not, as over time it is easy to depart from these firm principles. A common misconception is to confuse research with audit, and here are a few key phrases that define the difference.

  • ‘Research is concerned with discovering the right thing to do: audit is ensuring it is done right [4].’

  • ‘Without research we cannot know the most effective practice. Without audit we cannot know if it is being practiced [5].’

The two processes follow different pathways. Audit is far more than a simple collection of data or a survey, and it is not suitable for use as a means of individual evaluation.

In conclusion, I asked the audience to consider the key components of the audit cycle, and stressed the importance of considering it as a cyclical process (Fig. 1).

In summary, my top tips for conducting an audit are:

  • Choose your topic wisely—there should be an expectation of improvement and it should be manageable, e.g. not the entire AMD service—choose an aspect such as patient times in clinic or one specific clinical outcome.

  • Criteria vs. standards—these must be clearly defined right at the start; note that criteria are broad statements of what should be happening, whilst a standard is more quantitative

  • Communication and engagement—all team members must be engaged at the start

  • Register the audit with your local audit department—it helps their metrics and they can help you!

  • Perform robust and clear data collection and analysis

  • Communication and engagement of all team members is critical for implementing change

  • Make all recommendations arising from the audit SMART (Specific, Measurable, Achievable, Relevant, Time-specific)

  • Continue the loop—the process of audit is really a spiral, and may involve resetting the previously stated standards

Disclosure MSB received lecture and workshop fees from Abbvie and Novartis.

References

1. Department of Health. Working for patients. London: HMSO; 1989 (Cm 555).

2. Department of Health and Social Care. The New NHS 1997 (cited on 13 Nov 2017). https://www.gov.uk/government/publications/the-new-nhs

3. National Institute for Health and Care Excellence (UK). Principles for best practice in clinical audit. 2002 (cited on 13 Nov 2017). https://www.nice.org.uk/media/default/About/what-we-do/Into-practice/principles-for-best-practice-in-clinical-audit.pdf

4. Smith R. Audit and research. BMJ. 1992;305:905–6.

5. National Health Service Blood and Transplant Service. The Difference between Clinical Audit & Research 2013 (cited on 13 Nov 2017). https://www.hospital.blood.co.uk/media/26838/difference-between-clinical-audit-research.pdf

Fig. 1 [S01]: The Audit Cycle

S02 Medisoft

Nick Kirby

Head of Business Intelligence, Medisoft Limited, Leeds Innovation Centre, 103 Clarendon Road, Leeds LS2 9DF, UK

Correspondence: Nick Kirby: Nick.kirby@medisoft.co.uk

Medisoft Ltd (Leeds, UK) was founded in 1997 and, from its first implementation in a National Health Service (NHS) trust in 2000, is now in use at over 150 hospitals in the UK, as well as in private facilities and overseas. It was recently acquired by Heidelberg Engineering GmbH (Germany) but still operates from its original offices in Leeds, United Kingdom.

Clinical audit was very much at the forefront when the development of the system was specified; in fact, the company founders first specified the audit outputs they would like to extract and then built the system necessary to deliver these.

The primary objective was ‘to develop a computer program in which detailed analysis of outcomes could be generated as a by-product of using an electronic system for ophthalmology patient record keeping’.

The scope of the system can be summarised as one that:

  • allows structured recording of ophthalmic finding/diagnosis/procedure,

  • interfaces with existing trust administrative (booking/billing) systems and imaging equipment, such that images like optical coherence tomography (OCT) can be imported into the patient record,

  • has specialist modules for diabetic retinopathy, cataract, vitreoretinal, glaucoma, and strabismus,

  • provides case management and detailed audit.

Example screens from the program are reproduced below (Figs. 1–3); using fictitious patient data, they illustrate the detail that can be captured by any appropriate healthcare professional caring for the patient. This can start with medical history and baseline visual acuity (VA) and OCT measures, which can be updated following each visit and/or treatment. A selection of validated default options is available to facilitate speedy and accurate reporting of surgical procedures, and a range of summary screens presents the collected data in clear and useful formats.

Some of these screens can provide a useful summary for a patient, showing, for example, different treatment events and VA outcomes plotted along a timeline.

Regarding the provision of audit data, one of the main benefits of capturing detailed structured data within an electronic patient record (EPR) system is the ability to perform in-depth reporting; to this end the system is supported by a rich suite of reports that are flexible and fast to run.

The system can also be used to represent patient data collectively, and enables users to filter, export and drill down to explore different components of the data (Fig. 1).

Illustrated below (Figs. 1–3) are a number of possible reports. Each presents a summary chart output with clearly labelled axes and other supplementary contextual data. Reports can easily be exported to formats such as PDF and Excel.

Locally collected data can be compared with those from other sources, such as landmark clinical trials, as illustrated in Fig. 2. The system can also be utilised beyond clinical outputs; an example is shown in which referral data were analysed geographically according to each patient’s home postcode (Fig. 3).

Display filters can alter the appearance of the map to show different map layers, such as local authority boundaries, and can also control which provider points (such as hospitals and optometry practices) and patient postcode points are displayed, as illustrated.

It is important to look beyond local audit and to think about how data can be pooled together. One of the benefits of recording standardised datasets is that tools can be developed to extract and collate data across multiple sites, allowing “big data” trends to be explored.

Medisoft has facilitated local, regional and national studies and the contribution of data to UK databases, such as the Royal College of Ophthalmologists’ National Ophthalmology Database, but all are subject to the necessary governance approvals and sign-off. Structured anonymised data are typically extracted and supplied to an investigator for analysis by their own statistical team.
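
As an illustration only (not a description of Medisoft functionality), the following minimal sketch shows how anonymised per-site extracts might be pooled for cross-site analysis, assuming each site supplies a CSV file with a common column layout; all file paths and column names are hypothetical.

    # Sketch: pool anonymised per-site CSV extracts into one dataset
    import glob
    import pandas as pd

    frames = []
    for path in glob.glob("extracts/site_*.csv"):        # one hypothetical file per site
        df = pd.read_csv(path)
        df["site"] = path                                # record which site the rows came from
        frames.append(df)

    pooled = pd.concat(frames, ignore_index=True)
    # Example cross-site summary: mean 12-month VA change by site (hypothetical column)
    print(pooled.groupby("site")["va_change_12m"].mean())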

Disclosure NK received lecture fees from Novartis.

Discussion

Q: Can different hospitals and trusts be using different versions of the Medisoft software?

Nick Kirby: Yes—this can depend on local agreements and preferences regarding updates and integration—but it is thought that most centres are within one or two updates of the latest version.

Q: Can detailed glaucoma criteria such as historical equivalence of post- op refraction, intra and post-operative complications be collected and imported into the system?

Nick Kirby: Yes, we would encourage the data collection to be comprehensive, as the data can then be easily exported into other systems such as Excel or statistical analysis packages.

Q: At the moment our centre tends to contact Medisoft directly when we require extraction of data for audit purposes, what do you think about training up local ‘super-users’ at each site to help facilitate this process?

Nick Kirby: Yes, this is something we are considering, and we are preparing some specialised local audit training offerings. As part of local service agreements we can also help with specific local data extraction requirements, which is also useful to Medisoft in seeing whether there are common requirements that could be provided as standard within the system.

Fig. 1 [S02]: Example of report available from the Medisoft system

Graphical outputs from Medisoft’s audit software mediSIGHT® are reproduced with permission. © Copyright 2017 Medisoft Limited.

Fig. 2 [S02]: Example report from Medisoft comparing local outcomes to those from randomised controlled studies

Graphical outputs from Medisoft’s audit software mediSIGHT® are reproduced with permission. © Copyright 2017 Medisoft Limited.

Fig. 3 [S02]: Example report from Medisoft showing a geographic location based analysis of referral data based on patient’s home postcode

Graphical outputs from Medisoft’s audit software mediSIGHT® are reproduced with permission. © Copyright 2017 Medisoft Limited.

S03 Introduction to statistics

Irene Stratton

Senior Medical Statistician, Gloucestershire Retinal Research Group (Above Oakley Ward), Cheltenham General Hospital, Sandford Road, Cheltenham, Gloucestershire GL53 7AN, UK

Correspondence: Irene Stratton: irene.stratton@nhs.net

My experience in the statistical analysis of ophthalmological data began with a project involving the National Ophthalmology Database (NOD), where I worked with a number of senior ophthalmologists to produce a report for the Healthcare Quality Improvement Partnership (HQIP). This was a feasibility audit of anti-vascular endothelial growth factor (anti-VEGF) treatments for age-related macular degeneration. While referencing this experience in my presentation, I will not discuss specifics, as a range of manuscripts reporting this work is currently in preparation and will be published shortly.

In discussing statistical considerations in ophthalmology audit, I have assumed that the audience would have available:

  • Electronic patient records (EPR) for ophthalmology

  • Data extraction tools

  • A choice of data fields

Where an audit is performed on the number of anti-VEGF injections being utilised, the data should ideally include:

  • Age at first injection

  • Index of multiple deprivation

  • Starting visual acuity (VA)

  • Visual acuity 12 months after first injection

  • Number of injections

However, in a routine clinical situation, audit is not as simple as it might first appear. A major factor is that the available data are not as ‘clean’ as data from a randomised clinical trial. In a trial, sponsored clinical research assistants will have robustly checked all data supplied for accuracy and completeness against source data documents, and the case record forms and data entry procedures may perform basic data checks for extreme values or for consistency.
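
By way of illustration, here is a minimal sketch of the kind of basic range and consistency checks described above, assuming a hypothetical extract with ETDRS letter scores and dates; the file and column names are assumptions.

    # Sketch: flag extreme or inconsistent values in a routine-data extract
    import pandas as pd

    df = pd.read_csv("injections.csv", parse_dates=["injection_date", "date_of_birth"])

    # Extreme values: ETDRS letter scores outside a plausible 0-100 range
    bad_va = df[(df["va_letters"] < 0) | (df["va_letters"] > 100)]

    # Consistency: an injection recorded before the patient's date of birth
    bad_dates = df[df["injection_date"] < df["date_of_birth"]]

    print(len(bad_va), "implausible VA values;", len(bad_dates), "impossible dates")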

There can be gaps in the data supplied for reasons which may include:

  • New staff may not know how to complete the EPR

  • Injection records may be missing

  • Patients may be lost from the clinic for reasons including death, co-morbidities and transportation problems

  • Visual acuity may not be collected at every visit

  • Visual acuity may be collected at a different visit from the injection

Where the data record is incomplete, patients will have to be excluded from analyses of change in VA, as only those with a baseline VA and at least one post-injection assessment of VA can be included.
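
A minimal sketch of that exclusion rule follows, assuming one row per VA measurement with the time of measurement expressed in days relative to the first injection; the table layout is hypothetical.

    # Sketch: keep only patients with a baseline VA and at least one post-injection VA
    import pandas as pd

    va = pd.read_csv("va_measurements.csv")
    # columns assumed: patient_id, days_from_first_injection, va_letters

    baseline_ids = set(va.loc[va["days_from_first_injection"] <= 0, "patient_id"])
    followup_ids = set(va.loc[va["days_from_first_injection"] > 0, "patient_id"])

    analysis_set = va[va["patient_id"].isin(baseline_ids & followup_ids)]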

There are statistical issues specific to ophthalmology, and there is now an ophthalmic statisticians’ group [1], of which I am one of 17 members, producing a series of educational papers in the British Journal of Ophthalmology to disseminate knowledge of these issues.

The first issue is whether to analyse ‘patients’ or ‘eyes’. The first treated eye tends to be chosen because, once the first eye has been treated, disease in the second eye (if affected) tends to be picked up earlier in its course, with less visual impairment. Use of both eyes can complicate statistical analysis unnecessarily [2].
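
For illustration, here is a sketch of restricting an analysis to each patient’s first treated eye, assuming an injection-level table with patient, eye and date columns (all names hypothetical).

    # Sketch: identify and keep the first treated eye for each patient
    import pandas as pd

    inj = pd.read_csv("injections.csv", parse_dates=["injection_date"])
    # columns assumed: patient_id, eye ("R"/"L"), injection_date

    first_eye = (inj.sort_values("injection_date")
                    .drop_duplicates("patient_id")[["patient_id", "eye"]])

    # Injections given to the first treated eye only
    first_eye_inj = inj.merge(first_eye, on=["patient_id", "eye"])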

One should then consider age at first injection, as younger patients tend to have better outcomes. Electronic medical record (EMR) data utilisation can be limited when patients had treatment before entry into the system; if this is not documented, these patients too have to be excluded. For the patients with complete data in the EMR, age at first injection can be summarised as the median, 25th and 75th centiles, as well as the minimum and maximum.
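
The summary described (median, quartiles, minimum and maximum of age at first injection) could be produced as below, again with hypothetical file and column names.

    # Sketch: summarise age at first injection
    import pandas as pd

    inj = pd.read_csv("injections.csv", parse_dates=["injection_date"])
    first = inj.sort_values("injection_date").drop_duplicates("patient_id")

    # describe() reports the minimum, 25th/50th/75th centiles and maximum, among others
    print(first["age_at_injection"].describe(percentiles=[0.25, 0.5, 0.75]))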

Another criterion that can be tracked is the ‘index of multiple deprivation’; this is a UK government measure of relative deprivation for small areas in England. It can be ascertained within Medisoft or externally via web resources [3].

In the United Kingdom I have found that, although there is a degree of variation in the level of deprivation between centres, it does not generally impact on either the level of follow-up or the clinical outcomes of the patients.

As a statistician I have been concerned by the lack of consistency in methods of converting VA scores recorded on a Snellen chart to logMAR values, and by the need for empirical measures of low vision, such as hand movements and counting fingers, to be converted to numerical values. For baseline vision measures it is usual to allow visual acuities recorded within a 2-week window prior to the first injection to be utilised.
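
As a concrete illustration, a Snellen fraction converts to logMAR as log10(denominator/numerator), so 6/60 becomes 1.0. The numerical values assigned below to counting fingers (CF) and hand movements (HM) are one convention among several and are an assumption, reflecting exactly the inconsistency noted above.

    # Sketch: convert recorded VA strings to logMAR values
    import math

    LOW_VISION = {"CF": 2.1, "HM": 2.4}    # counting fingers / hand movements: assumed values

    def to_logmar(va):
        if va in LOW_VISION:
            return LOW_VISION[va]
        numerator, denominator = va.split("/")          # e.g. "6/60" or "20/200"
        return math.log10(float(denominator) / float(numerator))

    print(to_logmar("6/60"))    # 1.0
    print(to_logmar("6/6"))     # 0.0
    print(to_logmar("HM"))      # 2.4 (assumed convention)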

The next VA measure needed is that after 12 months of treatment; it may not be at exactly 52 weeks after first treatment, and a window of 48–56 weeks is permissible. I refer back to my earlier comments regarding missing data and note that the reporting of deaths can be an issue across sites; there is an expected mortality in this demographic group of ~5% according to the Office for National Statistics [4].
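
A sketch of applying the 48–56-week window to select the 12-month VA follows, using the same hypothetical measurement table as in the earlier sketch.

    # Sketch: choose the VA measurement closest to 52 weeks within a 48-56 week window
    import pandas as pd

    va = pd.read_csv("va_measurements.csv")   # patient_id, days_from_first_injection, va_letters

    window = va[va["days_from_first_injection"].between(48 * 7, 56 * 7)].copy()
    window["distance"] = (window["days_from_first_injection"] - 52 * 7).abs()

    va_12m = (window.sort_values("distance")
                    .drop_duplicates("patient_id")[["patient_id", "va_letters"]])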

Other possible reasons for non-attendance at clinic, such as co-morbidities and satisfaction or dissatisfaction with therapy, are rarely recorded within an EMR; where they are recorded, it is often as free text, which can be challenging to analyse.

Previous analyses have simply reported the data ‘as is’, but I feel that this could bias results, as those not attending tend to have lower vision, and might lead to an overestimation of the efficacy of the intervention. One method is to perform a ‘time to event’ analysis looking at the time to losing or gaining 5, 10 or 15 letters, which would be unbiased. This survival type of analysis is complex and will require multiple imputation, which needs the involvement of an experienced statistician.
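
One possible sketch of such a time-to-event approach (here, time to losing 15 or more letters) is shown below, assuming the lifelines package and a hypothetical one-row-per-eye table giving follow-up time and whether the loss was observed; it illustrates the general method rather than the specific analysis described, and the multiple imputation step is not shown.

    # Sketch: Kaplan-Meier estimate of time to losing >=15 letters
    import pandas as pd
    from lifelines import KaplanMeierFitter

    tte = pd.read_csv("time_to_loss.csv")
    # columns assumed: weeks_followed, lost_15_letters (1 = loss observed, 0 = censored)

    kmf = KaplanMeierFitter()
    kmf.fit(durations=tte["weeks_followed"], event_observed=tte["lost_15_letters"])

    print(kmf.median_survival_time_)    # median time to a 15-letter loss
    kmf.plot_survival_function()        # survival curve; censored patients are handled properly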

Another solution that has in some cases been utilised is that of last observation carried forward, but I feel this to be mistaken; a useful discussion of this topic is available in work from the Biostatistics Center at George Washington University [5].
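
For clarity, last observation carried forward simply fills each missing later value with the most recent earlier one, as in this brief sketch with made-up numbers; it is shown only to illustrate the approach being cautioned against.

    # Sketch: last observation carried forward (the approach argued against above)
    import pandas as pd

    visits = pd.DataFrame({"patient_id": [1, 1, 1],
                           "visit_week": [0, 24, 52],
                           "va_letters": [60.0, 55.0, None]})

    # The missing week-52 value is replaced by the week-24 value (55), which can bias results
    visits["va_locf"] = visits.groupby("patient_id")["va_letters"].ffill()
    print(visits)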

The final criterion I will consider is the number of injections given in the first year; for this to be valid, the patient needs to have data from a visit at 52 weeks after initial treatment or later.
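
Finally, here is a sketch of counting first-year injections only for patients whose recorded follow-up reaches 52 weeks, assuming hypothetical injection and visit tables.

    # Sketch: count first-year injections for patients with a visit at >=52 weeks
    import pandas as pd

    inj = pd.read_csv("injections.csv", parse_dates=["injection_date"])
    visits = pd.read_csv("visits.csv", parse_dates=["visit_date"])

    first = inj.groupby("patient_id")["injection_date"].min()   # first injection per patient

    # Patients with at least one recorded visit 52 weeks or more after first treatment
    weeks_to_visit = (visits["visit_date"] - visits["patient_id"].map(first)).dt.days / 7
    followed = visits.loc[weeks_to_visit >= 52, "patient_id"].unique()

    # Injections given within the first 52 weeks, restricted to those patients
    weeks_to_inj = (inj["injection_date"] - inj["patient_id"].map(first)).dt.days / 7
    n_first_year = inj[(weeks_to_inj < 52) & inj["patient_id"].isin(followed)].groupby("patient_id").size()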

Disclosures IS received consulting fees from Bayer, lecture fees from Bayer and Novartis, and grant support from Bayer and Boehringer Ingelheim. IS receives royalties from the United Kingdom Prospective Diabetes Study (UKPDS) Outcomes Model.

References

1. National Institute for Health Research Statistics Group. Organisation homepage. 2017 (cited on 15 Nov 2017). https://statistics-group.nihr.ac.uk/research/ophthalmology

2. Bunce C, Patel KV, Xing W, Freemantle N, Doré CJ. Ophthalmic statistics note 1: unit of analysis. Br J Ophthalmol. 2014;98:408–12.

3. Ministry of Housing, Communities and Local Government. English indices of deprivation 2015 (cited on 15 Nov 2017). http://imd-by-postcode.opendatacommunities.org/

4. Office for National Statistics. Standardised mortality ratios. 2013 (cited on 15 Nov 2017). https://data.gov.uk/dataset/standardised_mortality_ratios

5. Lachin JM. Fallacies of last observation carried forward analyses. Clin Trials. 2016;13(2):161–8.