
Unraveling the Process of EMS Software Implementation

Issue 11 and Volume 40.

There are probably things that keep you awake at night. Back in my medic days, there were a few. Now that I’m a father of a teenage girl, I’ve significantly added to the list, but both of those lists pale in comparison to the things that keep me awake as the head of an information technology (IT) department.

Well, let’s be honest, none of them keep me awake at night, but they’re continuously running around in my brain giving me plenty of things to think about. Since this is an article about IT-related topics, I’ll leave the parenting and medic discussion for those more learned than I, and focus on technology.

IT is an increasingly scary workplace. Stop and think for a second about some of the major data breaches you’ve heard about just in the last year or so. Target, Home Depot and Sony Pictures come to mind immediately, but there were nearly 750 data breaches in 2014 according to the Identity Theft Resource Center.1 Of those, over 300 breaches occurred in healthcare, which exposed over 8 million records.

So, what can IT folks in the EMS industry do? First, how we handle software selection, purchasing and implementation has a lot to do with security. Second, how we handle the volume of data these systems create is equally important, if not more so.


Data lives in a few distinct places. First, data is created, most frequently by front-end software systems; software that's designed well starts securing the data at this point in the process. Next, there's data in motion: the point at which data moves from one device or system to another, and in many cases the time when data is most vulnerable. Next is data at rest, where data lives out most of its life on internal and external servers and storage systems. When data breaches occur, this is the stage at which large numbers of records are exposed. When discussing the data lifecycle, I frequently add a fourth category: data in use. This category covers data when it's moved from a state of rest into some form of analytical cycle, such as hotspot maps, syndromic surveillance or reporting. Understanding how software systems create, move, store and utilize data is key to putting the safety and security of data, and thus your patients' protected health information, at the front of any conversation.
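To make "data in use" concrete: a hotspot map is really just an aggregation run over records pulled from data at rest. Here's a minimal sketch in Python; the call coordinates and grid size are made up for illustration, not drawn from any real system.

```python
from collections import Counter

# Hypothetical call records: (latitude, longitude) pairs pulled from
# data at rest in an ePCR or CAD database.
calls = [
    (39.7402, -104.9903),
    (39.7405, -104.9911),
    (39.7610, -104.8811),
    (39.7402, -104.9899),
]

def hotspot_counts(points, cell_size=0.01):
    """Bucket each point into a grid cell; the counts drive a hotspot map."""
    def cell(lat, lon):
        return (round(lat / cell_size), round(lon / cell_size))
    return Counter(cell(lat, lon) for lat, lon in points)

counts = hotspot_counts(calls)
busiest_cell, busiest_count = counts.most_common(1)[0]
```

The moment records leave storage for a calculation like this, they're "in use"—and any laptop or report that holds the output becomes part of your security conversation.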


Much like data, software lives in different locations. In its simplest form, software either lives locally or “in the cloud.” Rather than define these for you, let’s talk about them in concept.

First, local software lives on your local computer, tablet or smartphone. Examples of local software are an electronic patient care report (ePCR) application that’s installed on a tablet you carry with you, or a computer-aided dispatch or billing software application that runs on your local workstation. It could also be software that’s running on a medical device such as a cardiac monitor, ventilator—you get the picture.

“In the cloud” usually means it’s running any place other than locally. At my agency, we utilize Microsoft Office365 for our email. Office365 is both a local and “in the cloud” system. We use email clients that run locally on our mobile or desktop devices and they connect to Microsoft Exchange 2013 servers, maintained by Microsoft in their data centers all over the country and the world—or “in the cloud.” The emails and attachments themselves live both locally and “in the cloud” on Microsoft servers. This is a classic example of software as a service (SaaS). I’m buying Microsoft Exchange 2013 services for my agency, thus I don’t have to maintain any local Microsoft Exchange 2013 infrastructure. Many organizations choose to have email that runs on local servers; local in this instance is located within the walls of the organization. These are subtle differences, but they’re key concepts in understanding how software works.


One more distinction about “in the cloud” to be discussed is the location of the cloud, and who owns it. You may have heard the terms “private” and “public” thrown around in relation to cloud-based software.

Simply said, private is something you control. It could be inside your own infrastructure, or controlled solely for you by an outside vendor. For instance, you may have an ePCR vendor willing to host the system on their servers, for your use exclusively. That’s still considered the private cloud.

The public cloud is when the resources are shared publicly. Keep in mind this isn’t talking about your data, it’s the servers and storage required for the software platform to function. An example of this would be an ePCR vendor that hosts several customers on the same set of servers.


If you bring up SaaS vs. cloud in conversation with IT people, it will likely generate the same level of disagreement as Coke vs. Pepsi. Here's my simple explanation: SaaS is just that, software as a service; you're buying a fractional right to use the software and its associated infrastructure. The cloud is a term most frequently used to indicate where something happens. SaaS most frequently happens "in the cloud," but not always—there are few absolutes in technology anymore. I'd encourage you not to use the two terms interchangeably. When you're talking to a vendor and they throw out either term, make sure you ask some clarifying questions in order to get a complete picture of how they deliver their services or applications.


This is probably the second most important question to ask. Let’s first consider, “What problem am I trying to solve?” This is known as the problem statement. I’d encourage you to write down the problem statement and make sure everyone in your organization agrees before you head too far down the road.

Here’s an example: Let’s say Frank’s Ambulance Service was having trouble getting paid for its claims because it wasn’t able to submit them electronically. Notice we didn’t write specifications or requirements in this step. All we did was clearly identify the problem.

Now we can work on, “What’s right for me?” You need to look at yourself and your organization pretty honestly in terms of what you’re capable of handling and supporting. How much risk tolerance and downtime are you willing to accept? Are you a control freak? Make sure everyone associated with a project understands where the points of control are. This is especially important if you’re looking at a cloud-based solution.

Be brutally honest when it comes to what you can afford; remember, the initial cost isn't the only cost involved. Make sure you can afford the continuing costs.
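One simple way to keep the continuing costs visible is to total everything over the expected life of the system before you sign. The sketch below uses entirely made-up figures, just to show the arithmetic:

```python
def total_cost_of_ownership(upfront, annual_costs, years):
    """Upfront price plus recurring costs (support, licensing,
    staff, hardware refresh) over the system's expected life."""
    recurring = sum(annual_costs.values()) * years
    return upfront + recurring

# Hypothetical five-year comparison: local install vs. cloud subscription.
local = total_cost_of_ownership(
    upfront=60000,
    annual_costs={"support_contract": 9000, "server_refresh_fund": 4000},
    years=5,
)
cloud = total_cost_of_ownership(
    upfront=5000,
    annual_costs={"subscription": 15000},
    years=5,
)
```

With these invented numbers the "cheap to start" option isn't the cheap one over five years—which is exactly why the math belongs in the selection process, not after.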

As to the question of what you can handle and support, that needs to be broken down. First, what does your IT department look like? Are you a small shop with one person, or do you have a dedicated department? What strengths and weaknesses does your IT department have? Do you have experienced people? How well do they know storage systems?

These questions will help you determine whether a local or cloud-based solution is right for you, or perhaps something in between. The smaller the organization and IT department, the better suited cloud-based solutions may be. In many cases, premium solutions that would demand extensive IT support when run locally require very little local support when purchased as cloud solutions. In some cases, support can be purchased along with the service or separately.

What about risk tolerance, downtime and control? From the IT perspective, risk tolerance is mainly about downtime and control issues.

For instance, is data stored externally riskier than data stored internally? There are a lot of factors that go into answering that question. Anytime data is in motion, it’s potentially at risk for a breach.
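One concrete control for data in motion is to insist that every connection your software makes uses verified TLS. Python's standard library defaults already enforce this; the sketch below simply shows what those defaults look like.

```python
import ssl

# A default context requires a valid certificate chain and a matching
# hostname before any data moves across the wire.
ctx = ssl.create_default_context()

# If a vendor's client software asks you to disable either check,
# treat that as a red flag during software selection.
```

The specifics vary by platform, but the question for any vendor is the same: is patient data encrypted and authenticated every time it moves?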

How about downtime? When you have an application that’s hosted locally, you have some control over downtime, right? Yes and no.

How good are your backup systems? Do you have a generator capable of powering all of your systems? Do you have redundant networks? Do you have a redundant connection to the outside world? These are all things you need to know about your infrastructure before you can make an accurate comparison to options hosted in the cloud.

From my perspective, one of the greatest benefits of having applications hosted in the cloud is service level agreements (SLAs). These agreements govern the uptime your cloud-based provider commits to.
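It helps to translate an SLA percentage into minutes of allowed downtime. A quick calculation follows; the percentages are typical industry tiers, not any particular vendor's terms:

```python
def allowed_downtime_minutes(sla_percent, period_minutes=30 * 24 * 60):
    """Minutes of downtime a provider can incur per 30-day period
    without breaking the stated SLA percentage."""
    return period_minutes * (1 - sla_percent / 100)

three_nines = allowed_downtime_minutes(99.9)  # roughly 43 minutes a month
two_nines = allowed_downtime_minutes(99.0)    # roughly 7 hours a month
```

Running the numbers this way makes SLA tiers comparable at a glance—and makes it obvious when a contract's promise doesn't match your tolerance for downtime.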

Shortly after we transitioned from a locally hosted Exchange server to Microsoft Office365, Microsoft suffered a pretty significant outage. They broke their SLA to us and we received a service credit in return.

Granted, email was down for a period, but I knew there was a whole team of experts working night and day on the solution. Email was restored in about 12 hours. Shortly before we transitioned from our locally hosted email system, we had a major email outage. For a cost, we had to engage the assistance of the vendor to help us recover from this problem. One staff member—while not an expert—worked on the problem with the outside vendor for nearly 48 hours. Email was restored in about 72 hours. Which situation would you rather be in?

Cost is an important aspect. It’s unfortunate, but I’ve seen organizations spend large amounts of money up front for a premium technology solution. Then after a year or so, they stop paying for support, put Band-Aid solutions in place and lay off staff required to support it. This all translates to a monumental waste of time and money. When you’re selecting technology solutions, you need to have a little crystal ball to make decisions appropriate for what you believe the future holds.


So, how will you know what solution is best? Sorry to be the bearer of bad news, but you may not always get a clear and concise answer. When you've invested time up front to establish what problem you're trying to solve, understood all aspects of the possible solutions and analyzed them for the best fit for your organization, you'll have the greatest likelihood of coming to the best decision for you and your team.

Remember to always ask questions about any proposed solutions. Don't be afraid of asking too many questions; there's no such thing when it comes to selecting and implementing new technology.


1. Identity Theft Resource Center. (Jan. 12, 2015.) Identity theft resource center breach report hits record high in 2014. Retrieved Feb. 5, 2015, from