Cardiac arrest has been one of the most elusive clinical conditions for EMS. Recent estimates put the annual incidence of out-of-hospital cardiac arrest in the U.S. at 326,200 people, of whom only 10.6% will survive.1 For decades, we've known from the experience in communities such as King County, Wash., that significantly better outcomes are possible.
For years, with little improvement in outcomes and cardiac arrest representing only a single-digit percentage of an EMS system’s total call volume, EMS leaders questioned, “Why bother?” Let me put it in perspective. If 326,200 people arrest each year and the overall survival rate is 10.6%, we are currently saving 34,577 people annually: moms, brothers, co-workers, friends. Now consider King County as a benchmark of what’s possible. King County reported an overall survival rate of 19.9% in 2014. If we extrapolate that nationally and every system performed at that level, overall survival would increase to 64,913 people per year–the total population of Harlingen, Texas.
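The arithmetic behind those figures is easy to check. A minimal sketch, using the counts and rates cited above (results truncated to whole people, as in the text):

```python
# Back-of-the-envelope check of the survival figures in the text.
ANNUAL_ARRESTS = 326_200    # U.S. out-of-hospital cardiac arrests per year
CURRENT_RATE = 0.106        # national overall survival rate
KING_COUNTY_RATE = 0.199    # King County's reported 2014 survival rate

saved_now = int(ANNUAL_ARRESTS * CURRENT_RATE)          # 34,577 people
saved_benchmark = int(ANNUAL_ARRESTS * KING_COUNTY_RATE)  # 64,913 people

print(saved_now)                      # 34577
print(saved_benchmark)                # 64913
print(saved_benchmark - saved_now)    # additional lives per year
```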
So you’re probably thinking, “What does that mean for me? Seattle has focused on cardiac arrest for decades. How could I possibly make a dent in my community?” Well, you have the same great people, the same AHA care guidelines, the same defibrillators and the same medications. Why can’t you achieve the same results? That’s exactly what MEDIC, the EMS agency serving Mecklenburg County, N.C., thought too. After years of effort and no major gains, they decided to do two things differently: 1) benchmark King County’s best practices, and 2) introduce a family of measures to support their testing of change ideas to improve outcomes.2
The result? MEDIC increased its overall survival from 10.5% to 15.4%. Imagine how many teachers and plumbers and accountants we might save if we all followed suit.
Why are you measuring?
Measurement is not new to EMS. Every organization measures something: intubation success, response time, accident rates, days in accounts receivable. In healthcare, we frequently find ourselves measuring for one of three reasons: research, accountability or improvement.3
Most of us don’t work much with measurement for research. We may see the results published in a paper in Prehospital Emergency Care. It’s important for building new knowledge about what improves care, but it won’t help us in real time as we try to improve and manage care every day.
Many of us are familiar with measurement for accountability or control, perhaps in the form of contractual measures such as response time reliability or city requirements for staffing levels. These data need to be accurate because leaders may make changes based on how they compare to the desired goals, but they mostly tell us about the result, not what goes into it or why it’s not where we want it to be.
Measurement used for improvement is what most of us need to manage and improve. It supports us in understanding the outcome–what matters to patients and taxpayers–and the processes that are part of achieving the outcome. These data are real-time and, much as an ECG provides a picture over time of the electrical activity of the heart, improvement measures provide an understanding of the performance and variation of important processes. These measures are usually not reported outside the organization and are used by the people doing the work to understand variation and see the effect of improvements. They include the EMS system outcomes–clinical, operational, financial–that are important, and the processes of care or practice that enable those outcomes. These measures are frequent, timely and just enough to help us learn and improve.
Family of Measures
We frequently talk about a performance “measure,” which is also described as an indicator, metric or key performance indicator (KPI).4 It’s a single measurement of specific data. For example, in cardiac arrest, the compression rate is a measure of the number of compressions per minute–a single value that can be measured and reported.
This is an important distinction because in order to manage or improve, it is useful to have a family or grouping of measures that include three types of measures: outcome, process and balancing measures.5 Figure 1 shows an example of these three types of measures applied to a cardiac arrest example adapted from Dr. Mickey Eisenberg’s book Resuscitate!6
Outcome measures are the ultimate result; they’re what is important to the patient. But an outcome measure is also a lagging measure: you may not see improvement in outcomes for some time, making it a late indicator of the impact of changes. Process measures tell us how the processes that affect the outcome are working, and whether our efforts to improve reliability are reducing variation in performance and moving us toward our goal. These are the measures we need to understand and manage improvement, because they measure the things we actually do. We can’t just say, “You need to improve survival to discharge,” and hope it happens. We can say, “You need to perform compressions with the right rate and depth,” or “You need to minimize interruptions to CPR,” which are the processes linked to better outcomes.
Finally, there is a type of measure you may be less familiar with: balancing measures. These are measures that look at what we are trying to improve from a different direction. Balancing measures help us make sure that as we make improvements, we don’t produce unintended consequences that we don’t want: increasing cost, injuries, reduced productivity, patient dissatisfaction. Together, a family of measures provides a full picture of improvement and the impact it’s creating. No single measure can provide the full information needed.
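If you track a family of measures in software, one simple way to organize it is by measure type. A minimal sketch; the specific measures and goals below are illustrative examples consistent with the cardiac arrest discussion, not a list from the article:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    kind: str   # "outcome", "process", or "balancing"
    goal: str   # direction we want the measure to move

# Illustrative cardiac arrest family of measures, following the
# outcome/process/balancing grouping described in the text.
family = [
    Measure("Survival to hospital discharge", "outcome", "increase"),
    Measure("Compressions at correct rate and depth", "process", "increase"),
    Measure("Interruptions to CPR", "process", "decrease"),
    Measure("Provider injuries during resuscitation", "balancing", "no increase"),
]

# Group the family by type for reporting.
by_kind: dict[str, list[str]] = {}
for m in family:
    by_kind.setdefault(m.kind, []).append(m.name)
```

No single entry tells the whole story; the grouping exists so outcome, process and balancing measures are always reviewed together.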
Measurement Definitions
In 2004, Bruce Moeller, PhD, then a fire chief in Florida, surveyed paramedic agencies throughout the state asking how response times were calculated or defined.7 Out of the hundred-plus responses, he received nine different definitions, and many of them differed from how a patient or elected official would define the measure. Moeller pointed out that the ambiguity and lack of a patient view limited their usefulness for improvement and benchmarking.
Defining measures is critically important. Organizations such as the National Quality Forum use a detailed format. The NHTSA EMS Performance Measure Project adhered to a modified measure definition format.8 Measure definitions can include several attributes, but most include a title, numerator, denominator, sampling strategy, and whether the data will be transformed from a simple count or measurement into a percentage or rate. It’s also helpful to identify whether it’s an outcome, process or balancing measure. Table 1 shows an example of one measure of stroke care: Suspected Stroke Patients Receiving a Prehospital Stroke Assessment.
Table 1: Measure Definition Example

Title: Suspected Stroke Receiving Prehospital Stroke Assessment
Description: To measure the percentage of suspected stroke patients who had a stroke assessment performed by EMS.
Rationale: Stroke assessments using prehospital stroke assessment tools can screen for stroke and affect patient destinations.
Type: Percentage
Category: Process
Numerator: Total number of suspected stroke patients who had a stroke assessment performed (CPSS, LAMS, RACE, etc.).
Denominator: Total number of suspected stroke patients transported by EMS.
Sample: All patients
Frequency: Monthly
Display: Statistical process control chart (p-chart)
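Computing a percentage measure like the one in Table 1 is simply numerator over denominator, reported each month. A minimal sketch; the monthly counts are invented for illustration:

```python
def percent_measure(numerator: int, denominator: int) -> float:
    """Return a percentage measure; guard against an empty denominator."""
    if denominator == 0:
        return float("nan")  # no eligible patients that month
    return 100.0 * numerator / denominator

# Hypothetical month: 48 suspected stroke transports,
# 42 with a documented prehospital stroke assessment.
rate = percent_measure(42, 48)
print(f"{rate:.1f}%")  # 87.5%
```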
Proper Display
How we display our data shapes how we interpret it and whether we take action. Be leery of displays like red, yellow and green traffic lights, gauges, or static tabular formats that don’t let you see variation over time. Several books go into great detail about the importance and methods of data display.
One method of displaying measures is the Shewhart statistical process control (SPC) chart.9 SPC charts display data in time series so you can see the variation visually, and they include sigma limits that, combined with a few simple rules, let you differentiate common variation in the process from special causes worth acting on. SPC charts are the gold standard for healthcare measurement and high reliability organizations. Figure 2 is an example of a control chart from MEDIC in North Carolina.
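For a p-chart like Figure 2, the sigma limits come from the overall proportion: the center line is p̄, and each month’s limits are p̄ ± 3·√(p̄(1−p̄)/n), where n is that month’s denominator. A minimal sketch with invented monthly survival counts (not MEDIC’s data):

```python
import math

# Hypothetical monthly data: (survivors, eligible arrests) per month.
months = [(4, 30), (6, 28), (5, 32), (3, 25), (7, 31), (5, 29)]

survivors = sum(s for s, n in months)
total = sum(n for s, n in months)
p_bar = survivors / total  # center line of the p-chart

for s, n in months:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = min(1.0, p_bar + 3 * sigma)   # upper control limit, capped at 100%
    lcl = max(0.0, p_bar - 3 * sigma)   # lower control limit, floored at 0%
    p = s / n
    flag = "special cause" if (p > ucl or p < lcl) else "common variation"
    print(f"p={p:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}  {flag}")
```

Because n varies month to month, the limits widen for small months and tighten for large ones, which is exactly why static thresholds mislead.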
Figure 2. P-Chart Utstein Survival to Hospital Discharge. Source: Mecklenburg EMS Agency, Charlotte, N.C.
Understanding Your Data
While data display is critically important to understanding your measures, it’s also important to act appropriately on what you see. Only time series data, displayed in a chart, can help you differentiate common variation from special causes. It also helps you see whether the variation in the process is as tight as you’d like and where the process is performing relative to where you want it to be. If there’s too much variation, or performance isn’t reaching your goal, that won’t be fixed with slogans, training or focusing on individuals. It will only improve if the process is improved. How can you transform your data into measures and displays that matter for you?
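Beyond single points breaching the sigma limits, simple run rules help spot special causes in time series data. One widely used rule flags eight consecutive points on the same side of the center line, signaling a sustained shift. A minimal sketch with invented data; the function name and run length are illustrative:

```python
def shift_detected(points, center, run_length=8):
    """Flag a run of `run_length` consecutive points on one side of the center line."""
    run = 0
    last_side = 0
    for p in points:
        side = 1 if p > center else (-1 if p < center else 0)
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            return True
    return False

# Nine months hovering above a 15% center line: a sustained shift,
# even though no single point breaches the control limits.
monthly = [0.17, 0.18, 0.16, 0.19, 0.17, 0.18, 0.16, 0.17, 0.18]
print(shift_detected(monthly, center=0.15))  # True
```

A shift like this is the chart telling you the process itself has changed; it calls for understanding the process, not exhorting individuals.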
References:
1. Mozaffarian D, Benjamin EJ, Go AS, et al. (2014). Heart disease and stroke statistics–2015 update: A report from the American Heart Association. Circulation. Retrieved April 28, 2016, from http://doi.org/10.1161/CIR.0000000000000152.
2. Studnek JR, Vandeventer S, Infinger AE, et al. Increasing cardiac arrest survival by using data & process improvement measures. JEMS. 2013 Jul;38(7):68-70, 72-6, 78-9.
3. Solberg L, Mosser G, McDonald S. The three faces of performance measurement: Improvement, accountability and research. Journal on Quality Improvement. 1997 Mar;23(3):135-147.
4. Lloyd R. Quality healthcare: A guide to developing and using indicators. Boston: Jones and Bartlett, 2004.
5. Provost LP, Murray SK. The healthcare data guide: Learning from data for improvement. San Francisco: Jossey-Bass, 2011.
6. Eisenberg MS. Resuscitate! How your community can improve survival from sudden cardiac arrest. Seattle: University of Washington Press, 2009.
7. Moeller B. Obstacles to measuring EMS performance. EMS Mgmnt J. 2004;1(2):8-15.
8. NHTSA. Emergency medical services performance measures: Recommended attributes and indicators for system and service performance. Washington, D.C.: NHTSA, 2009.
9. Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458–464.