Imagine you are a fire commissioner assessing the performance of your fire department over the preceding year. Your department responded to 100 fires, twenty of which were greater than two alarms. During fifteen of the twenty greater alarm fires, the entire structure burned to the ground, as did one or more exposure buildings. There were civilian casualties in six of the fires and firefighter injuries at ten more. When you ask the fire chief about the department’s performance, the chief replies: “Well, our response times are all within five minutes, so we’re doing great.”
Imagine now that you are a city manager evaluating the annual performance of your police department. During the last 12 months, crime has increased substantially with a large uptick in murders. Stolen vehicle incidents are at historic highs, and over 80 percent of homicides remain unsolved. When asked about the department’s response to the increase in crime, the chief replies: “Well, our average response time is under eight minutes, so by our standards, we’ve never been better.”
Do both scenarios seem absurd? Perhaps. However, in many jurisdictions, response time is often the only measure of performance used to evaluate emergency medical services. When reporting EMS performance to civilian oversight bodies, presentations rarely include discussions about patient outcomes, clinical proficiency, or other quality measures of prehospital care; the response time is the primary measure of performance.1,2,3
Measuring EMS exclusively on response times is like measuring the performance of the San Francisco Giants strictly by how fast its pitcher throws the ball. Sure, it helps if the starter can blister a fastball, but that by itself won’t win the game. For EMS, winning means a critical patient, treated and transported by EMS, who walks out of a hospital neurologically intact. It is also a low-acuity patient who is routed to the correct definitive care, so he or she does not have to call 911 again.
Why then, the emphasis on response times? The answer is simple: response times are easy to measure. Metrics like clinical performance and patient outcomes are not. At least that used to be the case.
Do Response Times Matter?
The question, “Do response times matter?” is not a new debate among EMS researchers. There have been several articles on the subject, some supporting response times as a primary performance measure4,5 and others arguing against.6,7,8 When I try to delve deeper into the research, all I can glean is that response times matter – sometimes. But only for a small handful of the myriad calls we respond to.
Available studies demonstrate a correlation between prehospital response times and patient outcomes for the following pathologies:
- Out-of-hospital cardiac arrest9
- Critical Trauma10
- ST Elevated Myocardial Infarction (STEMI)12
There are other pathologies for which we may intuitively infer a connection between response time and outcome (anaphylaxis, severe respiratory distress). However, no data are available to support any definitive conclusions. Interestingly, the research demonstrates that response time is only one of many critical factors for the conditions that do have a response time-outcome relationship. The assessment and interventions performed by responders during the call, in concert with the choice of patient destination, are equally, if not more, consequential. For example:
- Out-of-hospital sudden cardiac arrest survival is the result of early continuous prehospital basic life support CPR (response time-dependent), early defibrillation for “shockable” cardiac dysrhythmias (response time-dependent), prehospital airway management (no response time connection), and a combination of inotropic and antidysrhythmic medications. In-hospital treatment is secondary to delivering the patient to the emergency department with a pulse and a blood pressure. Moreover, new research shows that bystander CPR and AED use have a connection to surviving sudden cardiac arrest that is as significant as, if not greater than, EMS response time to the scene.13,14
- Some studies show no statistical difference in critical trauma patient outcomes based on arbitrary ambulance response times, such as “eight minutes or less.”15 However, the 9th edition of Prehospital Trauma Life Support (PHTLS) describes a “golden period,” a time-dependent window to manage shock in a critical trauma patient and get them to surgery, although that period is different from one patient to the next.
Still, the implication is that the earlier care begins, the more likely shock may be reversed, improving the chances of survival.16 Short on-scene time is crucial for a multi-system trauma patient. However, according to PHTLS, we significantly increase the likelihood of survival if we deliver the patient to a trauma center with external bleeding controlled, correctly oxygenated, appropriately ventilated, and warm.17 Why then, have I never heard a fire or EMS chief boasting about their agency’s trauma on-scene times, tourniquet usage, or overall trauma survival rates to oversight committees?
- Every two years, we renew our advanced cardiac life support certification, which reinforces the goal of a short “door to balloon time” in order to “reduce time to STEMI reperfusion in-hospital.”18 If that is the case, then part of the integrated prehospital-to-hospital STEMI ballet includes a swift EMS response. However, the patient only benefits from a quick response if it pairs with early 12-lead ECG acquisition, accurate prehospital ECG interpretation, and direct transport to the correct specialty hospital. How often does one hear a news story about an EMS agency’s remarkable 12-lead compliance or STEMI door-to-balloon times compared with others throughout the state?
- Like its STEMI cousin, acute stroke patients may benefit if they are delivered to a stroke center within a fibrinolytic window, reducing “door-to-needle time.” Although this can intuitively be associated with a quick EMS response, the benefit only exists if the patient is correctly diagnosed in the field and directly routed to a stroke specialty hospital.19
So, is it reasonable to say response times matter? For critical trauma, cardiac arrest, STEMI, and stroke patients, sure. But the benefits quickly dissipate if the paramedic fails to provide the correct assessment, care, and transport to the right hospital. Despite the lack of research available, for this article, we will intuitively add anaphylaxis and severe respiratory distress to the list.
There. That’s six types of calls. For the dozen or so other reasons people call 911, there is no evidence to support the belief that an ambulance arriving at 9:00 versus 9:06 makes any difference in patient outcome whatsoever. Still, our stakeholders spin themselves into a lather by lumping all 911 calls into the same collective pool; the response time to the stable inebriate is assessed with the same weight as that of the gunshot victim.20
There is research available that examines this reality, notably Salvucci et al.:
“As measured today, response times are inaccurate, incomplete, and inconsistent, and do not measure the important call-to-treatment interval. What is needed is a comprehensive EMS information system that will allow us to examine and better match responses to patient needs. Telematics, emergency medical dispatch, and advanced communication systems will all play a role in the smarter—rather than faster—response system of the future.”21
Some progressive EMS systems are embracing the “smarter response system of the future” that Salvucci et al. describe, using multiple performance measures in addition to response times to make evidence-based decisions.22 However, many do not and continue to make poor decisions using the flawed metric of “faster.”
Posting on Street Corners: The (Flawed) Argument for ‘System Status’
Entire EMS systems have been structured exclusively on the “faster” measure of performance. One of the outcomes is computer-assisted dispatch paired with dynamic ambulance deployment models, also known as “system status,” which ambulance crews suffer through in many areas of the United States. In all system status/dynamic deployment models, ambulance postings are based on historical locations of high call volume. As the system gets busier, and ambulance resources deplete, the algorithm constantly shuffles and re-shuffles available ambulances to different posts that would allow the crew to respond and arrive at the scene within a set time.
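The shuffling logic behind these systems is, at its core, a coverage-maximizing heuristic. A toy sketch follows; the posts, zones, travel times, and demand weights are all hypothetical illustrations, not any dispatch vendor's actual algorithm:

```python
# Toy sketch of a "system status" redeployment heuristic (hypothetical data).
# Posts carry historical demand weights; each free ambulance is greedily
# assigned to the post that covers the most still-uncovered demand within
# the response-time target.

TARGET_MIN = 8  # response-time target, in minutes

# travel_min[post][zone]: assumed drive time from a post to a demand zone
travel_min = {
    "PostA": {"Z1": 4, "Z2": 9, "Z3": 12},
    "PostB": {"Z1": 10, "Z2": 5, "Z3": 6},
    "PostC": {"Z1": 7, "Z2": 7, "Z3": 11},
}
demand = {"Z1": 30, "Z2": 50, "Z3": 20}  # historical call counts per zone

def redeploy(free_ambulances, travel_min, demand, target=TARGET_MIN):
    """Greedily assign each free ambulance to the post that adds the most
    newly covered demand (zones reachable within `target` minutes)."""
    covered = set()
    assignments = {}
    for unit in free_ambulances:
        best_post, best_gain = None, -1
        for post, times in travel_min.items():
            if post in assignments.values():
                continue  # one unit per post in this toy model
            gain = sum(demand[z] for z, t in times.items()
                       if t <= target and z not in covered)
            if gain > best_gain:
                best_post, best_gain = post, gain
        assignments[unit] = best_post
        covered |= {z for z, t in travel_min[best_post].items() if t <= target}
    return assignments, covered

assignments, covered = redeploy(["M1", "M2"], travel_min, demand)
```

Note what the objective function optimizes: minutes to scene, weighted by historical call volume. Nothing in the model knows whether any patient in any zone actually benefited from the faster arrival.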
Several studies support the use of the system status model, all of which use response time-based metrics as the exclusive measure of performance. Patient outcomes are notably absent from consideration. One example is a Taiwanese report that uses a time-based metric to create an algorithm for dynamic ambulance deployment.
The authors make an unsupported claim that assumes survival rates are exclusively dependent upon the amount of time between a call for service and arrival at the hospital: “As the survival rate of those with urgent conditions is affected by the ambulances, the performance of EMS can be measured by the Time of Arrival at Hospital (TAH), defined by the time from the EMS call is received until the patient arrives at a hospital.”23
A Carnegie Mellon study also takes an algorithmic approach to ambulance deployment, relying exclusively on a supply and demand metric: “In the dynamic setting, free ambulances can be redeployed to nearby bases to improve service levels under changing conditions. This is motivated by the intuition that a temporary repositioning of available ambulances offers better (expected) service to incoming requests until the busy ambulances are freed.”24
To be fair, the two studies, and others like them, do demonstrate that a dynamic ambulance deployment system is faster; however, that is all they show.25 If all we want from EMS is a quick ride to the hospital, we should contract with Uber and Lyft. Many low-acuity patients are doing exactly that, instead of shouldering the astronomical cost of ambulance transport.26 Much of the research, like the two studies cited previously, evaluates the entire baseball team solely by the speed of the pitcher’s fastball.
I believe the use of the “faster = better” narrative is a marketing tool for selling system status to policymakers. Focusing exclusively on response times as an argument for a dynamically deployed system, while excluding the metrics of patient outcome, obfuscates its only proven advantage: a dynamically deployed system can run more calls with fewer ambulances than a static system can, which makes it cheaper.
Another consideration that is frequently excluded from the static versus dynamic conversation is the human factor. It is my experience that a 12-hour shift spent sitting in an ambulance, traveling from post to post, punctuated by attempts to meet arbitrary response times while managing the dead, the dying, and the mentally ill, is rough. No matter who you are, it is a grueling exercise that takes a toll on mind and body.
In over 18 years, I have never encountered a paramedic who believes that a career spent on a dynamically deployed ambulance is sustainable. If we care about patient outcomes, we need a cadre of EMS professionals who are seasoned, motivated, and dedicated to serving their community. In my experience, system status breeds the opposite.
I could not find any published studies that compare dynamic versus static deployment models analyzing patient outcomes, the success of prehospital procedures, and appropriate hospital destination. Future research should be dedicated to closing this knowledge gap.
The Way Forward – Data, Data, and More Data
A quote, attributed to management guru Peter Drucker, proclaims: “If you can’t measure it, you can’t improve it.”27 We currently operate in a vast EMS “data desert,” and can only view small snippets of EMS performance information. To improve, we must shed more light on what is happening to the patients we treat and transport. Step one is to digitally connect receiving hospitals to EMS agencies and maintain a registry of outcomes.
A model for this is the Utstein criteria, which track out-of-hospital cardiac arrest patients who survive to discharge using a standardized methodology. The Utstein data is administered nationally by the Cardiac Arrest Registry to Enhance Survival (CARES).28 The outcome database should not be limited to cardiac arrest. All critical patient outcomes (stroke, STEMI, critical trauma, etc.) should be collected, shared, and reported in a similar standardized format.
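To make the registry idea concrete, here is a minimal sketch of what a shared outcome record and a survival-to-discharge calculation might look like. The field names are illustrative assumptions, not the actual Utstein template or CARES schema:

```python
# Illustrative outcome-registry record, loosely modeled on Utstein-style
# cardiac arrest reporting; field names are assumptions, not the CARES schema.
from dataclasses import dataclass

@dataclass
class ArrestRecord:
    bystander_cpr: bool
    shockable_rhythm: bool
    response_min: float
    survived_to_discharge: bool

def survival_rate(records, predicate=lambda r: True):
    """Survival-to-discharge rate for the subset matching `predicate`."""
    subset = [r for r in records if predicate(r)]
    if not subset:
        return None
    return sum(r.survived_to_discharge for r in subset) / len(subset)

# A tiny made-up registry for illustration only
registry = [
    ArrestRecord(True, True, 5.0, True),
    ArrestRecord(False, True, 9.0, False),
    ArrestRecord(True, False, 6.5, False),
    ArrestRecord(False, False, 7.0, False),
]

overall = survival_rate(registry)
with_cpr = survival_rate(registry, lambda r: r.bystander_cpr)
```

The point of the standardized record is the predicate: once hospitals feed discharge outcomes back to EMS agencies in a common format, the same registry can answer “how does survival vary with bystander CPR, with rhythm, with response time?” rather than reporting response time alone.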
The National EMS Information System (NEMSIS) is the best source we currently have for EMS data collection on a national level. To that end, NEMSIS is laudable. However, the usefulness of NEMSIS data is limited because its policy states: “The dataset does not contain information that identifies patients, EMS agencies, receiving hospitals, or reporting states.”29
It is understandable that, due to HIPAA considerations, NEMSIS will not release information that could identify individual patients. However, there is no reason why EMS agency or statewide data should be withheld. If we believe EMS is a public service, then the public should have access to performance data from their local, regional, and statewide jurisdictions.
An alternative framework for EMS performance tracking is the California EMS Authority’s annual EMS Core Measures Report. The report identifies seventeen separate performance measures from each participating local EMS authority jurisdiction within the state. Unfortunately, not all California local EMS authorities participate in statewide data collection efforts, nor does California mandate that they do so.30
Despite the report’s limitations, California remains one of the few U.S. states that openly publishes an annual “report card” of statewide EMS performance that is not limited to response times. Participation in the core measures report should be mandatory.
If our policymakers could see the entire EMS landscape, then “smarter” decisions would follow. Unfortunately, the “data desert” limits our understanding of what is truly happening in the vast EMS veldt. We will continue to myopically obsess over the “faster” measurement of response times until we can broaden our view of what quality EMS is.
As for the San Francisco Giants, I’d like to see their batting averages and RBIs improve, no matter how fast the pitcher throws the ball.
1. “Ambulance Response to Life-Threatening Emergencies | City Performance Scorecards.” Accessed May 9, 2020. https://sfgov.org/scorecards/public-safety/ambulance-response-life-treatening-emergencies.
2. Press Enterprise. “REGION: Response Times an Issue for Fire Departments,” February 2, 2014. https://www.pe.com/2014/02/02/region-response-times-an-issue-for-fire-departments/.
3. Saunders, Lincoln. “Lehigh Acres Renews Ambulance Service Agreement with Lee County for One Year.” WINK NEWS (blog), March 5, 2019. https://www.winknews.com/2019/03/05/lehigh-acres-renews-ambulance-service-agreement-with-lee-county-for-one-year/.
4. Elizabeth Ty Wilde, “Do Emergency Medical System Response Times Matter for Health Outcomes?,” Health Economics 22, no. 7 (July 1, 2013): 790–806, accessed October 5, 2019, https://onlinelibrary.wiley.com/doi/abs/10.1002/hec.285; http://www.emdac.org/docs/Wilde_EMS%20Response%20Times%20&%20Outcomes_Health%20Econ_2013.pdf.
5. EMS1. “Do EMS Response Times Matter?” Accessed May 9, 2020. https://www.ems1.com/paramedic-chief/articles/do-ems-response-times-matter-dfXVRm8yC8DxriBc/.
6. Pons, Peter T, and Vincent J Markovchick. “Eight Minutes or Less: Does the Ambulance Response Time Guideline Impact Trauma Patient Outcome?” The Journal of Emergency Medicine 23, no. 1 (July 1, 2002): 43–48. https://doi.org/10.1016/S0736-4679(02)00460-2.
7. Price, L. “Treating the Clock and Not the Patient: Ambulance Response Times and Risk.” Quality & Safety in Health Care 15, no. 2 (April 2006): 127–30. https://doi.org/10.1136/qshc.2005.015651.
8. Blackwell, Thomas H., Jeffrey A. Kline, J. Jeffrey Willis, and G. Monroe Hicks. “Lack of Association Between Prehospital Response Times and Patient Outcomes.” Prehospital Emergency Care 13, no. 4 (January 2009): 444–50. https://doi.org/10.1080/10903120902935363.
9. Jill P. Pell et al., “Effect of Reducing Ambulance Response Times on Deaths from out of Hospital Cardiac Arrest: Cohort Study,” BMJ 322, no. 7299 (June 9, 2001): 1385–1388, accessed October 5, 2019, https://www.bmj.com/content/322/7299/1385.
10. National Association of Emergency Medical Technicians. PHTLS: Prehospital Trauma Life Support. Ninth Edition. (2020): Burlington, MA: Jones & Bartlett Learning 37.
11. Kamal, Noreen, Eric E Smith, Thomas Jeerakathil, and Michael D Hill. “Thrombolysis: Improving Door-to-Needle Times for Ischemic Stroke Treatment – A Narrative Review.” International Journal of Stroke 13, no. 3 (April 2018): 268–76. https://www.researchgate.net/profile/Noreen_Kamal/publication/321091361_Thrombolysis_Improving_door-to-needle_times_for_ischemic_stroke_treatment_-_A_narrative_review/links/5a144964aca27240e309ce89/Thrombolysis-Improving-door-to-needle-times-for-ischemic-stroke-treatment-A-narrative-review.pdf.
12. Rachael T. Fothergill et al., “Survival of Resuscitated Cardiac Arrest Patients with ST-Elevation Myocardial Infarction (STEMI) Conveyed Directly to a Heart Attack Centre by Ambulance Clinicians,” Resuscitation 85, no. 1 (January 1, 2014): 96–98, accessed October 5, 2019, https://www.resuscitationjournal.com/article/S0300-9572(13)00732-6/abstract.
13. Guillaume Geri et al., “Effects of Bystander CPR Following Out-of-Hospital Cardiac Arrest on Hospital Costs and Long-Term Survival,” Resuscitation 115 (June 1, 2017): 129–134, accessed December 22, 2019, http://www.sciencedirect.com/science/article/pii/S0300957217301697.
14. Rajan, Shahzleen, Mads Wissenberg, Fredrik Folke, Steen Møller Hansen, Thomas A. Gerds, Kristian Kragholm, Carolina Malta Hansen, et al. “Association of Bystander Cardiopulmonary Resuscitation and Survival According to Ambulance Response Times After Out-of-Hospital Cardiac Arrest.” Circulation 134, no. 25 (December 20, 2016): 2095–2104. https://doi.org/10.1161/CIRCULATIONAHA.116.024400.
15. Peter T Pons and Vincent J Markovchick, “Eight Minutes or Less: Does the Ambulance Response Time Guideline Impact Trauma Patient Outcome?,” The Journal of Emergency Medicine 23, no. 1 (July 1, 2002): 43–48, accessed October 5, 2019, http://www.sciencedirect.com/science/article/pii/S0736467902004602.
16. PHTLS: Prehospital Trauma Life Support. Ninth Edition. 32.
17. PHTLS: Prehospital Trauma Life Support. Ninth Edition. 50, 69, 74, 75.
18. Neumar, Robert W, Michael Shuster, Clifton W Callaway, Lana M Gent, Dianne L Atkins, Farhan Bhanji, Steven C Brooks, et al. “2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care,” n.d., 293.
19. Kamal et al.
20. Inland County EMS Agency, “Performance based contracts annual report January 2019-December 2019.” https://www.sbcounty.gov/icema/main/ViewFile.aspx?DocID=4399.
21. Angelo Salvucci et al., “The Response Time Myth: Does Time Matter in Responding to Emergencies?,” Advanced Emergency Nursing Journal 26, no. 2 (June 2004): 86, accessed October 5, 2019, https://journals.lww.com/aenjournal/Abstract/2004/04000/The_Response_Time_Myth__Does_Time_Matter_in.3.aspx.
22. UC Health EMS, “Annual Report 2018,” https://www.poudre-fire.org/home/showdocument?id=5852.
23. Sean Shao Wei Lam et al., “Dynamic Ambulance Reallocation for the Reduction of Ambulance Response Times Using System Status Management,” The American Journal of Emergency Medicine 33, no. 2 (February 1, 2015): 159–166, accessed December 22, 2019, http://www.sciencedirect.com/science/article/pii/S0735675714007827.
24. Yisong Yue, Lavanya Marla, and Ramayya Krishnan, “An Efficient Simulation-Based Approach to Ambulance Fleet Allocation and Dynamic Redeployment,” in Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012, accessed December 22, 2019, https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/view/5148.
25. Maxwell, Matthew S., Mateo Restrepo, Shane G. Henderson, and Huseyin Topaloglu. “Approximate Dynamic Programming for Ambulance Redeployment.” INFORMS Journal on Computing 22, no. 2 (May 2010): 266–81. https://doi.org/10.1287/ijoc.1090.0345.
26. Leon Moskatel and David Slusky, “Did UberX Reduce Ambulance Volume?,” Health Economics 28, no. 7 (2019): 817–829, accessed December 18, 2019, https://onlinelibrary.wiley.com/doi/abs/10.1002/hec.3888.
27. “The Two Most Important Quotes In Business,” Growthink, last modified February 25, 2017, accessed December 18, 2019, https://www.growthink.com/content/two-most-important-quotes-business.
28. “MyCaresTM,” accessed December 18, 2019, https://mycares.net/.
29. “Request Research Data,” NEMSIS, accessed December 18, 2019, https://nemsis.org/using-ems-data/request-research-data/.
30. “EMS Core Quality Measures Project | EMSA,” accessed December 18, 2019, https://emsa.ca.gov/ems-core-quality-measures-project/.