
If You Want to Hold the Errors, Hold the Swiss

The goal of this article is to explain the “Swiss cheese” phenomenon, how to spot it, and how to prevent the errors it describes.

On a snowy Wednesday afternoon in Washington, D.C., in January 1982, Air Florida Flight 90 crashed into the 14th Street Bridge over the Potomac River just after takeoff from Washington National Airport. The crash claimed the lives of 74 of the 79 passengers and crew, plus four people on the bridge. The subsequent National Transportation Safety Board (NTSB) investigation revealed a chain of failures that makes this accident a prime example of the “Swiss cheese model.”

The Swiss Cheese Model

What does Swiss cheese have to do with errors in aviation and EMS? The phrase was coined in 1990 by James Reason, a British psychologist and safety-management researcher. In any inherently risky industry, such as aviation or medicine, the risk cannot be eliminated, so safeguards are put in place to mitigate it. Risk-mitigation plans contain many layers, such as established procedures, training and equipment.


Each piece of this plan can be thought of as a slice of cheese; put together, the plan resembles many slices laid one atop the other. Even the best risk-mitigation plans have failures. These can be active failures, such as failing to follow procedure or errors in decision-making, or latent conditions, such as flaws in the plans or systems themselves. Each failure is like a hole in a slice, turning every layer into Swiss cheese. Because the layers are staggered, an error that slips through one hole is usually caught by the next mitigation layer in place. Sometimes, however, the holes align, and that is when disaster strikes.

Analogy of an Aviation Accident

The trouble for Flight 90 started well before takeoff. Prior to the scheduled departure, all departures from the airport had been halted due to the snowstorm, and the plane sat at the gate for nearly two hours. Before pushing back, the plane was de-iced, but the same vehicle was operated by two different technicians, and the mixture applied to the right and left sides of the aircraft was significantly different. Once the plane was finally cleared to push back, the “tug” vehicle initially could not get traction, so the pilots used “reverse thrust” to attempt to back away from the gate. Reverse thrust is normally used just after landing to help slow the aircraft, and Boeing had specifically warned against using it under these conditions. The reverse thrust blew additional ice and snow up onto the wings.

Once a properly equipped “tug” vehicle was available, the aircraft was finally able to leave the gate, only to sit in a taxi line for 49 minutes before reaching the runway. Throughout this time, snow continued to fall, and the temperature remained in the mid-20s. A delay of that length would normally necessitate another round of de-icing, but the pilots, unfamiliar with cold-weather operations and already behind schedule, elected not to return for de-icing for fear of losing their place in line.

Once lined up for takeoff, the pilots continued an off-topic conversation, violating “sterile cockpit” rules and taking their focus off the critical task at hand. During the checklist performed immediately prior to takeoff, they called for the engine anti-ice feature to remain off for an unknown reason. Air traffic control told the flight to expedite its takeoff because another aircraft was on final approach right behind it, adding to the pressure on the flight crew. Snow and ice that had accumulated in the engines caused falsely high thrust readings, so the aircraft attempted to take off with significantly less power than the short runway required. Snow and ice on the wings further degraded aerodynamic performance. Although the pilots acknowledged that the gauges didn’t seem right, they continued their off-topic conversation and failed to abort the takeoff. Seconds after becoming airborne, the aircraft stalled and crashed into the bridge.

The NTSB listed the primary cause of the crash as the flight crew’s failure to maintain a sterile cockpit during the final takeoff checklist, which led to the engine anti-ice heaters remaining off, combined with the failure to abort the takeoff when abnormalities were noted. Multiple contributing factors were also listed, including the prolonged ground delay with improper anti-icing applied, the improper use of reverse thrust during pushback, and the failure to return for de-icing after the significant delay on the taxiway. In addition, rescue efforts were hampered by the weather. It is believed that 19 occupants actually survived the initial crash but died in the frigid waters.1 Despite valiant efforts by a U.S. Park Police helicopter crew, it took over 20 minutes for the first ambulances to reach the scene due to the snow, further hampering rescue operations.2

In the Swiss cheese model analogy, each safety measure in place is represented by a slice of cheese. These safety measures are not perfect, and just like Swiss cheese, they have holes in them. On occasion, people poke even more holes in the cheese by violating safety procedures, as seen in the Air Florida incident. The most effective safety systems have more layers and fewer holes – the more holes in the system, the greater the chance that a failure will progress through every layer and end in disaster.

Applying the Swiss Cheese Model to EMS

Nearly every EMS agency has protocols and procedures in place to prevent errors. Each of these represents a slice of cheese, and inherently there are small holes in all of them. Take a hypothetical from the current COVID-19 pandemic: an agency has established thorough protocols to prevent crew exposures, with strict policies on donning proper personal protective equipment (PPE), screening patients and following specific transport protocols.

While in concept this may seem like a solid slice of cheese, in reality there are inherent holes in the plan – what if the primary ambulance suddenly fails and the crew has to switch into a reserve unit that isn’t fully outfitted with proper PPE and a hands-free chest compression device? What if the call that comes in just as the primary vehicle fails is a cardiac arrest? Because critical equipment isn’t moved over, a hole in the cheese is created. The human factor pokes even more holes in the cheese, and the more holes there are, the more likely an error will get through the safety systems and cause disaster.

Un-aligning the Holes

While we can make efforts to close the holes in our cheese, it is difficult to plan for every possible condition and make every system perfect. The best we can realistically hope for is to rearrange our slices of cheese so that the holes don’t align. The whole basis of the Swiss cheese model is that, in that one singular instance, the holes in the cheese aligned and loss occurred. In other words, we can neither eliminate risk nor remove every weakness in our risk-mitigation plans.

What does this mean for us in EMS? We always do well to plan for all eventualities. That can also mean involving those who are not typically part of the planning process – ask the crews on the street, “Where do you see this plan failing?” In planning for COVID-19, I have been surprised by the number of things our planning team had not thought of, or thought were intuitive, but that turned out to be unclear or not feasible once we talked to more providers in the field. Tasks such as proper donning and doffing of PPE may seem simple, but they are often overlooked in both initial and continuing education. Even presenting short videos on these topics has proved successful in preventing error.

The development of specific checklists, like the ones used by flight crews, may seem to make us more algorithmic, but it ensures we operate the same way, every time. As an example, New South Wales Ambulance/Greater Sydney Area Helicopter EMS (HEMS) uses specific “emergency action cards” for critical situations (see Figure 1).

Figure 1: The Greater Sydney Area HEMS uses a series of emergency action cards that guide responders through definitive steps for rectifying life-threatening changes in a patient’s condition during low-frequency, high-risk events. Courtesy NSW Ambulance/Greater Sydney Area HEMS (free/open access).

These cards start with a statement such as, “The patient is hypotensive. Can you help me troubleshoot?” They then move to specific steps, much like a pilot’s checklist, starting with a rapid scan of the patient and specific checks to perform. This rapid assessment is followed by specific action items, as well as prompts for differentials. Along with developing these checklists, we must also train and drill on them – having a checklist is great, but it will inevitably fail if it is not constantly drilled in training and used in real-life situations. Checklists are common for high-risk, low-frequency events such as rapid sequence induction (RSI), but also consider high-frequency events such as medication administration. In one retrospective study, the use of a medication cross-check tool reduced medication administration errors by 49.0% after implementation.3

Lastly, leverage the idea of crew resource management (CRM). Another concept borrowed from the airline industry, CRM focuses on communication and decision-making in high-stress environments. While CRM deserves its own article, one basic tenet is shared decision-making and empowering all crew members to speak up when something isn’t right.4 In the Air Florida crash, the co-pilot mentioned during the takeoff roll that something seemed off, but he stopped short of calling for a rejected takeoff. Using proper CRM, upon recognition of any issue, any crew member would be empowered to say “STOP.” Had the co-pilot called to abort the takeoff, lives would likely have been saved. I tell any provider I run with, whether they are the most experienced paramedic or a brand-new EMT, “If you see anything unsafe, or see something that we shouldn’t be doing, say something.”

It is unlikely that we will ever remove the holes from our Swiss cheese. We work in an inherently risky environment, and as long as we are human, we will continue to have flaws in even the best thought-out plans. The best we can do is to ensure the holes of our Swiss cheese don’t align so that we may provide the safest possible care to our patients.

References

  1. National Transportation Safety Board. Aircraft accident report, Air Florida, Inc., Boeing 737-222, N62AF, collision with 14th street bridge, near Washington National Airport, Washington, DC, January 13, 1982. Washington (DC); 1982 August 10. 142 p. Report No.: NTSB AAR-82-8.
  2. Hiatt F, Kurtz H. Emergency services reacted quickly to jetliner’s crash. Washington Post [Internet]. 1982 January 15 [cited 2020 May 3]; News:[about 4 pages]. Available from: https://www.washingtonpost.com/archive/politics/1982/01/15/emergency-services-reacted-quickly-to-jetliners-crash/d7697e6b-22a5-4895-a5d8-18fe0ff9d2fd/.
  3. Misasi P, Keebler JR. Medication safety in emergency medical services: approaching an evidence-based method of verification to reduce errors. Ther Adv Drug Saf. 2019 Jan 21; 10: 2042098618821916.
  4. Lubnau TE, Okray R. Crew resource management for the fire service. Fire Engineering. 2001 August 1; 154(8).