It’s often said that data is the lifeblood of any organization. Data leads to information, information leads to knowledge and, over time, knowledge leads to wisdom. This data, which primes the pump and drives the entire process, comes in many forms, including sound, pictures, patient information and financial budgets. However, as indispensable as data is to everyday operations, it’s also burdensome. It accumulates faster than dust balls on a hardwood floor and, eventually, we all feel the burden of data storage.
As we mentioned in “The Root of All Data” (January 2009 JEMS) and “Time to Decide” (“Going Electronic” supplement to August 2009 JEMS), Montgomery County (Texas) Hospital District (MCHD) is a data-centric operation. We purposefully and strategically collect, analyze and use a substantial amount of data and its derivatives every day. Much of that data is collected to meet legal and regulatory requirements. However, we also collect large amounts to support our clinical goals, decisions and business best practices. As a consequence of our dedication to data-driven operations, we’ve developed extensive systems for retaining, storing and managing our data across all three time frames—real-time, short-term backup and long-term archival.
For this discussion we focus on short-term backup solutions.
One Size Doesn’t Fit All
Volumes have been written on digital data backup, so we’ll attempt to provide a “35,000-foot” overview of some of the concepts key to the design and implementation of an effective data backup function.
One main concept is data retention. Obviously, the amount of data you’ll retain is crucial to how you design and configure your backup system. The more data you choose to retain, the larger, more complex and more robust your backup operations will need to be. The size of your data set will guide decisions, such as the type of backup media you’ll use, the frequency with which you’ll perform full versus incremental backups and where you’ll store data.
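To see how retention and backup schedule drive storage needs, here’s a back-of-the-envelope sizing sketch. All figures (backup size, daily change rate, retention window) are hypothetical placeholders, not MCHD’s actual numbers; substitute your own measurements.

```python
# Rough storage estimates for two common backup schedules.
# All figures are hypothetical; plug in your own organization's numbers.

FULL_SIZE_GB = 500      # size of one full backup of the data set
DAILY_CHANGE_GB = 10    # data added or modified per day (incremental size)
RETENTION_WEEKS = 4     # how many weeks of backups to keep on hand

def weekly_full_plus_daily_incremental(weeks=RETENTION_WEEKS):
    """One full backup per week plus six daily incrementals."""
    return weeks * (FULL_SIZE_GB + 6 * DAILY_CHANGE_GB)

def daily_full(weeks=RETENTION_WEEKS):
    """A full backup every day of the week."""
    return weeks * 7 * FULL_SIZE_GB

print(weekly_full_plus_daily_incremental())  # 4 * (500 + 60)  = 2240 GB
print(daily_full())                          # 4 * 7 * 500     = 14000 GB
```

Even with these modest placeholder numbers, daily full backups demand several times the media of a weekly-full-plus-incremental schedule—one reason the choice of schedule should follow from your data volume, not the other way around.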
At MCHD, the decision regarding how much data we retain is made when we determine which data we’ll collect. Like many of you, we’re a governmental entity and required to retain very nearly all our collected data (no matter the form). Although your circumstances and needs may differ, there’s no doubt that your organization will collect a substantial amount of data that should be retained.
Make It Dependable
Another key concept to consider is reliability. You might one day have to ask yourself, “What good is backed-up data that is written on irretrievably damaged media?” Or, “What good is backed-up data that is, itself, corrupted?”
Frighteningly, a number of failure vectors are inherent in the backup process. A single corrupted file can weaken the integrity of your entire data set and, unfortunately, you can’t see corrupted or damaged digital data with the naked eye. You may not even know it’s there … until it’s too late.
No matter the cause, backed-up data that has lost its integrity can be as bad as no backed-up data at all.
For the above reasons and more, it’s important that your backup system have robust mechanisms for detecting errors on the media as well as provisions for validating the integrity of the data being written to it. Your system should also alert you promptly and effectively to any problems it detects.
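One widely used integrity check is to store a cryptographic checksum alongside each backup copy and verify it before trusting the copy. The sketch below (file names and layout are illustrative, not any particular backup product’s) uses SHA-256 from Python’s standard library:

```python
# Sketch: write a SHA-256 checksum next to each backup file, then verify
# the checksum before relying on the copy. Names here are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_backup_with_checksum(src: Path, dest: Path) -> None:
    dest.write_bytes(src.read_bytes())  # copy the data to backup media
    dest.with_suffix(dest.suffix + ".sha256").write_text(sha256_of(dest))

def verify_backup(dest: Path) -> bool:
    """Recompute the hash and compare it to the one recorded at write time."""
    expected = dest.with_suffix(dest.suffix + ".sha256").read_text()
    return sha256_of(dest) == expected
```

A nightly job that calls `verify_backup` on each copy—and raises an alert on any mismatch—turns silent corruption into a problem you hear about while you can still do something about it.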
Plan for Easy Recovery
Yet another key concept is that of data recoverability. We view “recoverability” as a measure of the time and complexity involved in regaining full operational status following a full (or partial) catastrophic failure.
The time and complexity involved in recovering data depends on the quantity of data involved, frequency and type of backups (full or incremental), and the accessibility of the backup media (i.e., data is in a manager’s desk drawer down the hall, locked in a bank vault or stored on an off-site server).
Let’s talk through several scenarios.
Scenario 1: Full backups are done irregularly.
Although better than nothing, this situation would likely result in a time-consuming and complex recovery process. In the event of a catastrophic failure, all the data that’s collected but not backed up would simply evaporate. As such, you would need to re-enter or re-capture all data collected since the previous backup. The difficulties of re-entering or recapturing lost data might easily dwarf the time and complexity associated with recovering the data from the backup media.
Scenario 2: Full backups are done weekly, and incremental backups daily.
This is a big leap from the previous situation. Doing regular full backups augmented by regular incremental backups requires some front-end time and effort. However, in the event of a catastrophic failure, this approach will pay huge dividends.
To fully recover under this scenario, your organization merely needs to recover the data from the last full backup and then recover the data contained in each intervening incremental data set. Although this recovery process can take time and be somewhat complex, it helps ensure the integrity and completeness of the recovered data. Fortunately, only the data collected in the hours since the previous backup (full or incremental) would need to be re-entered or recaptured.
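The restore order described above—last full backup first, then each incremental replayed oldest to newest—can be sketched in a few lines. Backups are modeled here as simple dictionaries of record IDs to values; a real system would restore files or database pages, but the layering logic is the same.

```python
# Sketch of the scenario 2 restore order: start from the most recent full
# backup, then overlay each incremental taken after it, oldest first.

def restore(full_backup: dict, incrementals: list) -> dict:
    state = dict(full_backup)   # begin with the last full backup
    for inc in incrementals:    # incrementals sorted oldest -> newest
        state.update(inc)       # each overlay applies the changes it captured
    return state

full = {"run_1001": "v1", "run_1002": "v1"}
monday = {"run_1002": "v2", "run_1003": "v1"}   # changes since the full
tuesday = {"run_1004": "v1"}                     # changes since Monday

print(restore(full, [monday, tuesday]))
# {'run_1001': 'v1', 'run_1002': 'v2', 'run_1003': 'v1', 'run_1004': 'v1'}
```

Note that replaying the incrementals out of order could resurrect stale versions of changed records, which is why backup software tracks and enforces the sequence for you.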
Scenario 3: Full backups are done daily.
This represents the ultimate in recoverability. Although it requires a sizable commitment of resources, this recovery process is likely the least time-consuming and least complex of our three scenarios. To retrieve your data, simply mount the backed-up data set and let the recovery software do its magic. As above, only the data that was collected subsequent to the last backup will need to be re-entered or recaptured.
You’ve Got It, So Protect It
Finally, let’s discuss data security. You’re probably already compliant with the legal obligations imposed by the Health Insurance Portability and Accountability Act (HIPAA) and provide a secure environment for other operational data. But you might be surprised how many of even the most security-conscious organizations fail to properly secure their backed-up data.
Imagine the risk associated with allowing a staffer (even a trusted one) to take a set of unencrypted data tapes or CDs/DVDs to their home for safe-keeping. Although having a backup data set stored off-site is definitely a positive, having an unencrypted data set “walking around” in the real world is a potentially serious breach of security.
The above situation is more common than you might think, but it doesn’t have to be. A number of alternatives are available for electronically transmitting encrypted data to a backup server located at a remote site that you control. You can “co-locate” your backup hardware and data at a local “server farm.” Or you can use an Internet service. Some considerations for the latter choice, besides cost, are the accessibility of your data and the physical security of the site. (For third-party vendors, you should also consider financial stability, historical “up-time” performance and trustworthiness.)
Your organization should use the “sneakernet” approach of physically moving data with a removable media storage device only as a last resort. But if that’s your only option, at least make sure the data is kept in a secure location, such as a fireproof safe or safe-deposit box, and that you can get quick access to it whenever needed.
MCHD’s data security plan precludes us from detailing our process, but we can tell you that we do a full backup every night. We transmit encrypted data over a dedicated channel to a facility that authorized MCHD staff and agents can always access.
It’s worth noting that, notwithstanding the considerable amount of data that we collect and retain every day, our analysis told us that a “deep” set of daily, full backups ensures we meet our data integrity objectives. Perhaps more important, this arrangement means we meet our objective of minimizing the complexity and time required to completely recover from a catastrophic system failure.
Balance Benefit & Cost
In a perfect world, everyone would have the time and resources to do full backups daily. But many organizations feel such a high level of commitment isn’t warranted on a risk-adjusted basis. Such an organization might rightly tell you that catastrophic system failures are very rare and, therefore, the expense of developing systems and processes for extensive data backups isn’t worth it. In other words, the perceived benefit does not outweigh the perceived cost.
Those objections are well noted. It’s up to each organization to determine how it will approach its data storage and backup needs. Just a few things to keep in mind: Fires in data centers happen, disk drives and servers crash, and malicious human action against your data is always a possibility.
If those things aren’t enough to keep you up at night, then consider the nearly infinite number of other things that could happen this very night that might precipitate a panicky field supervisor call with those dreaded words, “I think something’s wrong with our computers.”
Remember: Bad things happen to good organizations. JEMS
Kelly Curry, BS, RN, LP, is the chief operating officer of the Montgomery County (Texas) Hospital District. Contact him at [email protected].
Calvin Hon, BS, LP, MCITP, is a project coordinator in the facilities and systems technology department of the MCHD. Contact him at [email protected].
Matt Folsom, LP, is a project coordinator in the facilities and systems technology department of the MCHD. Contact him at [email protected].
Acknowledgement: Michael Lambert, president of the consultancy PaladinSG, contributed to this article.
Montgomery County is located just to the north of Houston. MCHD is an independent governmental entity that provides the county’s indigent health care services and 9-1-1 emergency medical response. MCHD also operates an 800-MHz radio system that supports other governmental entities and emergency responders in the county.
Get a list of glossary terms from this article at jems.com/extras.