How should we measure what we do?
Our profession has long struggled to prove that what we do matters. Although very few studies definitively justify the expense of our advanced EMS systems around the country, two important and comprehensive analyses, completed in 2009 and 2014, addressed the issue of effectiveness.
The first study made the case that EMS unquestionably improves patient outcomes and health.1 The second declared that EMS is an essential public health service that produces economic good for society.2 The reports were profound, exhaustively researched, and firmly anchored in scientific analysis.
The first document was produced by the National EMS Advisory Council (NEMSAC), the nationally designated assembly of EMS representatives and consumers established by Congress to provide advice and recommendations on EMS to the federal government.
NEMSAC comprises 25 national experts representing every segment of the EMS system. It's the only advisory body for EMS at the federal level that exists by legislative mandate, and the sole statutory authority organized to provide official advice to the federal government.
The second report was conducted by a study team from the National Academy of Public Administration, commissioned by the National Highway Traffic Safety Administration (NHTSA) Office of EMS.2
Now, we know we matter, and that what we do improves our patients’ lives and our society.
Our next conundrum has always been how to measure what we do. What metrics are important and reveal the quality of the service an EMS agency provides?
This has been an area of great debate over the years, continuously shaped by a shifting focus on what constitutes equitable, scientifically based measurements that truly capture an EMS agency's operational and financial quality. In recent years, emerging research has destroyed many of our old perceptions about which measures and metrics are appropriate for assessing our services.
Response Time & Productivity
Response times have long been a mainstay metric used to determine the quality of an EMS operation. For decades, response times were used as a fundamental criterion in RFPs (requests for proposals) issued by towns to award EMS contracts. Not anymore!
Several studies over the past decade (and even one dating back to 1953!), along with a landmark NHTSA report, have definitively established four important facts:3,4
1. Use of emergency warning devices (EWDs, such as lights and siren) saves minimal time (1.7–3.6 minutes) in both response to the scene and transport to the hospital.5–7
2. Use of EWDs increases the likelihood of ambulance/emergency vehicle crashes and subsequent property damage and injuries to EMS practitioners and the public.5,7–9
3. Use of EWDs increases the stress experienced by patients, adversely affecting their vital signs and blood chemistry.5,7
4. Extremely few patients benefit from a rapid EMS response to their emergency.7,10–12 In fact, the time interval that correlates most strongly with morbidity and mortality is the total time from initial symptoms to definitive care (i.e., hospital care with physician intervention). This includes dispatch processing, response to the scene, time on task (i.e., treating the patient at the scene, notably the lengthiest segment of them all),13 transport to the hospital, and turnover to hospital clinicians (often the second-longest segment). The sketch following this list makes these intervals concrete.
Even considering the total time interval, fewer than 10% of EMS patients are affected by statistically significant morbidity and mortality.5
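To make this concrete, here's a minimal sketch, using entirely hypothetical timestamps, of how the component intervals described above combine, and how small a slice of the total the response segment occupies:

```python
# Hypothetical patient timeline; all timestamps are invented for illustration.
from datetime import datetime

timeline = {
    "symptom_onset":    datetime(2018, 3, 1, 14, 0),
    "call_received":    datetime(2018, 3, 1, 14, 22),
    "unit_dispatched":  datetime(2018, 3, 1, 14, 24),
    "arrival_on_scene": datetime(2018, 3, 1, 14, 32),
    "depart_scene":     datetime(2018, 3, 1, 14, 55),  # on-scene care: often the longest segment
    "arrival_hospital": datetime(2018, 3, 1, 15, 10),
    "transfer_of_care": datetime(2018, 3, 1, 15, 30),  # turnover: often the second longest
}

events = list(timeline.items())
total_min = (events[-1][1] - events[0][1]).total_seconds() / 60
for (start_name, start), (end_name, end) in zip(events, events[1:]):
    minutes = (end - start).total_seconds() / 60
    print(f"{start_name} -> {end_name}: {minutes:4.0f} min ({minutes / total_min:.0%} of total)")

# The 8-minute response segment is a small fraction of the 90-minute
# symptom-to-definitive-care interval that correlates with outcomes.
```

In this invented timeline, shaving even a few minutes off the response segment barely moves the total interval that actually matters.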
As the industry has recognized that response times matter little and that EWD use is detrimental, prioritized dispatch protocols have been incorporated into many systems, resulting in dramatically fewer EWD responses.14,15 EMS systems around the country are now intentionally decreasing the number of 9-1-1 requests to which they send units using EWDs.
So, what use is measuring response times anymore? Continuing to measure and report them to the public reinforces the notion that this metric means something. Based on the science, we should be educating the public toward the opposite conclusion: EWDs should be used rarely, and response times aren't an important measure of an EMS operation.
What about productivity as a metric? The efficiency of an EMS operation is certainly important to those paying the bills. This is especially true for agencies that rely completely on fee-for-service reimbursement to fund their services, and for the public whose taxes support EMS.
The industry standard measure of productivity is the unit hour utilization (UHU). But local protocols, regulations, and the expectations of both the public and governing bodies substantially affect UHU.
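For illustration, UHU is commonly calculated as workload (transports or responses) divided by the unit hours an agency fields; exact definitions vary by system, and the figures in this sketch are entirely hypothetical:

```python
# Hypothetical unit hour utilization (UHU) calculation.
# UHU is commonly computed as transports (or responses) divided by
# the unit hours produced; exact definitions vary by system.

def unit_hour_utilization(transports: int, unit_hours: float) -> float:
    """Return UHU: workload events per staffed unit hour."""
    if unit_hours <= 0:
        raise ValueError("unit_hours must be positive")
    return transports / unit_hours

# Example: an agency staffs four ambulances around the clock
# (4 x 24 = 96 unit hours) and runs 38 transports that day.
print(f"UHU = {unit_hour_utilization(38, 4 * 24):.2f}")  # UHU = 0.40
```

Whether a figure like 0.40 represents healthy efficiency or a dangerously thin deployment depends entirely on local context.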
For many reasons, including surge capacity and public relations, EMS agencies may need to operate at a less productive level than would otherwise be fiscally prudent. Because of these factors, UHU can’t be equitably used to compare one EMS system against another.
Protocol Compliance & Skill Performance
With response times and productivity off the table, what should we assess as a meaningful measure of our services? Compliance with treatment protocols and successful practitioner skill performance may be an option.
EMS systems around the country have varying levels of patient treatment protocols, and the degree to which practitioners comply with them can be argued to reflect quality of care. Likewise, the more proficient caregivers are with their skills, the higher level of care they provide.
Treatment protocols and scope of practice are largely determined locally, and reflect the standards desired by the cognizant medical community.
Therefore, the more practitioners follow them, the more the EMS agency is fulfilling its mission as part of the regional healthcare system. Additionally, stronger caregiver skills are better for the patient.
However, compliance with protocols and expertise in skill performance are almost always self-reported through the patient care report.
It’s human nature to minimize one’s inadequacies and deviations from protocol. People are reluctant to admit the errors in the care they render.
If we comparatively measure the quality of EMS agencies based on protocol adherence and skill performance, especially if these data are released to the community at large as part of some "report card," there will be a strong incentive to under-report mistakes and over-report compliance and proficiency.
It will be difficult to design a process that results in more reliable information.
Patient Outcomes
Patient outcomes come to mind as another possible metric.
With outcomes, the only accurate reflection of the impact EMS practitioners have is the difference between the patient’s condition upon our arrival and their status upon transfer to definitive care.
This is hard to measure empirically, because we self-report how the patient is found, and when we transfer them to hospital personnel, we rely either on self-reporting or on information from the receiving nurse or physician.
We can’t depend on discharge diagnosis or status, since we have no control over the care the patient receives in the hospital after we transfer them.
To compare one EMS agency's quality of patient care to another, we would have to somehow level the playing field to adjust for differences in demographics; some populations are sicker than others and experience worse outcomes.
Medicare and state health departments have accomplished this “equalization” to varying degrees with hospitals.
We also would need to adjust the outcomes data for EMS agencies around the country to compensate for the significantly different base level of health and environmental conditions that exist in our various service areas.
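For illustration, one common equalization technique is indirect standardization: compare each agency's observed outcomes against the number expected given its patient mix, yielding an observed-to-expected (O/E) ratio. The strata, rates, and case counts below are entirely invented:

```python
# Minimal sketch of indirect standardization (observed/expected ratio),
# one common risk-adjustment approach. All rates and counts are invented.

# Reference survival rates per acuity stratum (e.g., from a national dataset).
reference_rates = {"low_acuity": 0.99, "medium_acuity": 0.90, "high_acuity": 0.55}

def observed_expected_ratio(case_mix: dict, survivors: int) -> float:
    """O/E ratio: values above 1.0 mean better-than-expected survival."""
    expected = sum(n * reference_rates[stratum] for stratum, n in case_mix.items())
    return survivors / expected

# Agency A serves a sicker population: 76% raw survival, O/E ~ 1.02.
agency_a = {"low_acuity": 200, "medium_acuity": 300, "high_acuity": 500}
print(f"Agency A O/E: {observed_expected_ratio(agency_a, survivors=760):.2f}")

# Agency B serves a healthier population: 94.5% raw survival, O/E ~ 1.00.
agency_b = {"low_acuity": 700, "medium_acuity": 250, "high_acuity": 50}
print(f"Agency B O/E: {observed_expected_ratio(agency_b, survivors=945):.2f}")
```

On raw survival, Agency B looks far better; adjusted for patient mix, Agency A is actually performing slightly above expectations while B merely meets them.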
Outcomes may still be a reasonable measure if we can determine reliable mechanisms to obtain this information along with standardized ways to report on the data.
Your Thoughts?
What are we to do? It seems, from this short analysis, that properly adjusted outcomes data could be a good comparative measure of EMS systems.
What do you think? Comment on the web version of this article on jems.com. In a future issue, I’ll report on your comments and opinions.
References
1. National EMS Advisory Council. (November 2009.) EMS makes a difference: Improved clinical outcomes and downstream healthcare savings. National Highway Traffic Safety Administration Office of EMS. Retrieved Jan. 28, 2018, from www.ems.gov/pdf/nemsac-dec2009.pdf.
2. Van Milligen M. (May 2014.) An analysis of prehospital emergency medical services as an essential service and as a public good in economic theory. National Highway Traffic Safety Administration Office of EMS. Retrieved Jan. 28, 2018, from www.ems.gov/pdf/advancing-ems-systems/Reports-and-Resources/Prehospital_EMS_Essential_Service_And_Public_Good.pdf.
3. Hunt RC, Brown LH, Cabinum ES, et al. Is ambulance transport time with lights and siren faster than that without? Ann Emerg Med. 1995;25(6):857.
4. Parks LL. Are speeding, open sirens and red light-breaking by ambulances necessary? J Fla Med Assoc. 1953;40(1):20—22.
5. Kupas DF. (May 2017.) Lights and siren use by emergency medical services (EMS): Above all do no harm. National Highway Traffic Safety Administration Office of EMS. Retrieved Jan. 28, 2018, from www.ems.gov/pdf/Lights_and_Sirens_Use_by_EMS_May_2017.pdf.
6. Brown LH, Whitney CL, Hunt RC, et al. Do warning lights and sirens reduce ambulance response times? Prehosp Emerg Care. 2000;4(1):70—74.
7. Murray B, Kue R. The use of emergency lights and sirens by ambulances and their effect on patient outcomes and public safety: A comprehensive review of the literature. Prehosp Disaster Med. 2017;32(2):209—216.
8. Saunders CE, Heye CJ. Ambulance collisions in an urban environment. Prehosp Disaster Med. 1994;9(2):118—124.
9. Ross DW, Caputo LM, Salottolo KM, et al. Lights and siren transport and the need for hospital intervention in trauma patients. Prehosp Emerg Care. 2016;20(2):260—265.
10. Marques-Baptista A, Ohman-Strickland P, Baldino KT, et al. Utilization of warning lights and siren based on hospital time-critical interventions. Prehosp Disaster Med. 2010;25(4):335—339.
11. Kupas DF, Dula DJ, Pino BJ. Patient outcome using medical protocol to limit “lights and siren” transport. Prehosp Disaster Med. 1994;9(4):226—229.
12. O’Brien DJ, Price TB, Adams P. The effectiveness of lights and siren use during ambulance transport by paramedics. Prehosp Emerg Care. 1999;3(2):127—130.
13. Puolakka T, Strbian D, Harve H, et al. Prehospital phase of the stroke chain of survival: A prospective observational study. J Am Heart Assoc. 2016;5(5):e002808.
14. Cone DC, Galante N, MacMillan DS. Can emergency medical dispatch systems safely reduce first-responder call volume? Prehosp Emerg Care. 2008;12(4):479—485.
15. Eastwood K, Morgans A, Stoelwinder J, et al. Patient and case characteristics associated with "no paramedic treatment" for low-acuity cases referred for emergency ambulance dispatch following a secondary telephone triage: A retrospective cohort study. Scand J Trauma Resusc Emerg Med. 2018;26(1):8.