
Artificial Intelligence and EMS

Issue 11, Volume 42.

Advances in artificial intelligence impact EMS

In the early 1980s, Jack Stout wrote a series in JEMS introducing the concepts of fractile response times, system status management (SSM) and high-performance EMS. For the next 15 years, he worked tirelessly to explore new thinking about how EMS was delivered.1

Jack showed us that knowledge is gained by capturing data, and that the resulting insights could be powerful and make us more efficient.

SSM was revolutionary because Jack showed us all how to correlate computer-aided dispatch data (incident dates, times and locations) to predict peak demand and schedule the right number of ambulances at the proper geographic posts to manage call volume efficiently and cost-effectively.

SSM merged multiple data reports, often from multiple computer systems, into one readable report built on hard, verifiable numbers. It was then up to system status managers to use that data and, often manually, program computers to deploy their resources.
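To make the idea concrete, here’s a minimal sketch (in Python, using the pandas library) of the kind of demand analysis SSM formalized: group historical CAD incidents by weekday and hour, estimate calls per hour, and translate that into unit postings. The file name, column name and staffing rule are hypothetical, chosen only for illustration.

# Minimal sketch of SSM-style demand analysis; illustrative only.
# Assumes a CSV of historical CAD incidents with a hypothetical
# "dispatch_time" column; the column name and staffing rule are invented.
import pandas as pd

cad = pd.read_csv("cad_incidents.csv", parse_dates=["dispatch_time"])
cad["weekday"] = cad["dispatch_time"].dt.day_name()
cad["hour"] = cad["dispatch_time"].dt.hour

# Average calls per weekday/hour cell over the number of weeks observed.
weeks_observed = cad["dispatch_time"].dt.to_period("W").nunique()
demand = cad.groupby(["weekday", "hour"]).size() / weeks_observed

# Toy posting rule: one ambulance per 2.5 predicted calls/hour, minimum of 2.
units_needed = (demand / 2.5).round().clip(lower=2).astype(int)
print(units_needed.unstack("hour"))

Real SSM work layers response-time targets, post locations and crew schedules on top of a demand table like this one.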

If you think SSM was a powerful tool for EMS, wait until we introduce you to the use of drones and artificial intelligence (AI) in EMS at EMS Today: The JEMS Conference in Charlotte, N.C. It’s like SSM on steroids!

We’ll be introducing AI in a comprehensive, full-day preconference workshop on Tuesday, Feb. 20, and we’ll also present an amazing, fast-paced session on the same technologies on Wednesday, Feb. 21, from 10:00–11:30 a.m.

For those thinking this is like Star Wars and not applicable to their EMS system, hold on because I’m going to take you on a fast, explanatory ride into this new technology galaxy.

Artificial Intelligence & Machine Learning

AI and machine learning (ML) are going to allow us to do millions of complex things in a nanosecond.

Although the terms are often used interchangeably, they’re different. AI is the broader concept of computers with the ability to act intelligently enough to perform tasks usually attributed to humans. ML is often referred to as a subset of AI, and is what most of us know as the current state of the art.2 Although AI encompasses a host of different approaches to solving complex tasks, ML builds on a specific premise: the system learns on its own from data.

ML is all about transitioning toward a paradigm where the data speaks for itself through the models, allowing us to capture knowledge that wouldn’t otherwise be apparent to humans.
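As a deliberately tiny illustration of that premise, here’s a minimal sketch in Python using the scikit-learn library. The model isn’t given any rules; it infers a pattern from labeled examples. The features, labels and numbers below are invented purely for illustration.

# Minimal sketch of machine learning: the model infers a rule from data
# rather than being explicitly programmed. All data below is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [age, systolic BP]; label 1 = "high acuity" (made-up examples).
X = [[25, 120], [34, 118], [71, 85], [80, 90], [45, 130], [67, 88]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The model has learned, from examples alone, that low BP in older
# patients tends to signal higher acuity in this toy data set.
print(model.predict([[75, 87]]))        # likely [1]
print(model.predict_proba([[30, 125]])) # probability estimates

Feed it more (and more realistic) examples and the learned pattern improves; that’s the whole premise.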

AI is classified into two groups: vertical and general. Vertical AI exists today and includes systems that can intelligently trade stocks and shares, or operate autonomous vehicles. They’re designed to excel at a specific purpose, but they don’t generalize to a broader set of problems.

Generalized AIs aren’t yet available. The concept is focused on the idea of a “thinking” machine: at some point in the future, systems will actually process information independently, much as a brain does, and handle complex tasks without human action, learning from their mistakes.2

Today’s AI and ML solutions are already extremely powerful. These systems can hold, store and process more information and research than traditional computers can today. Consequently, they can work faster and more comprehensively than the human brain.

The result: AI and ML are now able to predict (i.e., tell us independently) what the patient’s problem is before a full set of vital signs is taken.

This concept isn’t new. What’s new is that we’ve finally developed ways to implement it. A recent article in The New York Times discussed how new technologies were testing the limits of computer semiconductors and how researchers have been able to closely mimic specific brain functions.3

The first challenge engineers faced was the enormous amount of computing power needed to move into the AI era. Computers, though powerful, were maxed out in their capabilities and necessitated a whole new way of thinking.

Engineers and programmers have now learned how to mimic the brain and turn computers into multicenter processing systems that can store incredible amounts of data, analyze it, perform multiple simultaneous processes and calculations from different sensory and stimulus areas, blend/merge them together into an incredible “data milkshake,” and conduct incredibly complex actions that we take for granted—all in a nanosecond.

Take a 1-year-old child, not yet mature enough to know that a hot pot on a stove is dangerous. He climbs up to the stove and touches the hot pot. The infant’s beautiful but uneducated, preprogrammed brain, biologically loaded with an amazing amount of data, processes what’s occurring and instantly sends a report (corrective action) to multiple pathways.

In a nanosecond, the infant’s brain senses the danger, processes the location and extent of the hazard, charts a path for escape, and sends signals to multiple programmable action pathways that allow independent biological systems to take immediate actions: the infant rapidly pulls back his hand, cries out to alert his parents of his plight, turns his body, flees the hazard area and runs to his parents for care.

Engineers are now able to recognize the limits of computers and build AI machines that think and act like the human brain, where a central brain stem oversees the nervous system and offloads tasks—like hearing and seeing—to the surrounding cortex.3

What’s amazing is that AI machines work on systems that allow them to navigate the physical world by themselves.

The Big Change

For five decades, computer makers built systems around a single, do-it-all chip, the central processing unit (CPU). The next-generation systems, capable of AI, are now dividing work into tiny pieces and spreading them among vast “farms” of simpler, specialized chips that consume less power.3

For example, Google’s servers now have enormous banks of custom-built chips that work alongside the CPU, running the computer algorithms that drive speech recognition and other forms of AI.3
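The “many small workers instead of one big one” idea can be sketched in ordinary Python using the standard multiprocessing module: split a job into pieces and farm the pieces out to a pool of simple workers. This is only an analogy for what the specialized chips do in hardware; the job and chunk size here are arbitrary.

# Analogy only: split one large job into small pieces and spread them
# across a pool of simple workers, rather than one do-it-all processor.
from multiprocessing import Pool

def process_piece(chunk):
    # Stand-in for a small, specialized unit of work.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
    with Pool(processes=8) as pool:
        partials = pool.map(process_piece, chunks)  # work spread across workers
    print(sum(partials))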

In 2011, Google engineers led a team that explored the idea of neural networks and computer algorithms that can learn tasks on their own, like recognizing words spoken into phones or faces in a photograph.

To support this innovation, Google had to more than double its data center capacity. An engineer proposed that Google build its own computer chip just to run this kind of AI.

Experts now know that machines spreading computations across a vast number of tiny, specialized, low-power chips can make more efficient use of the energy at their disposal, allowing them to do many more things seamlessly.

For example, Microsoft builds software that runs on an Intel CPU. Windows can’t reprogram the chip because it’s hardwired to only perform certain tasks.

The new chips, called field-programmable gate arrays (FPGAs), can be reprogrammed for new jobs on the fly, just like reprogramming an EMS officer’s radio in the field.

In the fall of 2016, a team of Microsoft researchers built a neural network that could recognize spoken words more accurately than the average human could.3

The leading internet companies are now training their neural networks with help from another type of specialized chip called a graphics processing unit (GPU), which can process the math required by neural networks far more efficiently than CPUs.3
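A minimal sketch, assuming the PyTorch library and (optionally) a CUDA-capable GPU, shows the kind of operation involved: a toy neural network “layer” is largely one big matrix multiplication, routed to whichever device is available.

# The heavy math behind neural networks is largely matrix multiplication,
# which GPUs execute in parallel far more efficiently than CPUs.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A toy "layer": multiply a batch of inputs by a weight matrix.
inputs = torch.randn(4096, 1024, device=device)
weights = torch.randn(1024, 512, device=device)
outputs = torch.relu(inputs @ weights)  # runs on the GPU when one is present

print(outputs.shape, "computed on", device)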

What Does It Mean for EMS?

If you watch Jeopardy, you may know that IBM programmed Watson, its AI machine, to play against the game show’s top winners of all time.

Watson won, hands down. Why? Because Watson was loaded with more facts in an hour than the average brain can absorb and process in a lifetime.

When 8,000 research papers on cancer are published around the world every day, it’s impossible for one team of exceptional physicians to keep up with it all. On a segment of 60 Minutes, IBM illustrated how it programmed Watson to read medical literature in one week. With this knowledge, Watson then read 25 million papers on cancer in another week. Watson then scanned the internet for open clinical trials.4

With this information, Watson was able to present new treatment options to learned physicians at a molecular tumor board meeting. AI can hold more, process more, recognize and recommend actions faster—and perhaps better—than the best medical minds in the country.

Andreas Cleve will be at EMS Today to present on the AI power in Corti, an augmentation platform for emergency dispatchers that’s presently in use in the Copenhagen EMS communications center in Denmark.

A Watson-like AI system, Corti helps the call-taker come to fast and precise conclusions by finding patterns in the caller’s description of what’s going on. Corti can do this because it can process audio 70 times faster than real time, allowing for advanced live computations. It’s like having an additional dispatcher on every call.

Corti analyzes the full spectrum of the audio signal, including acoustic signal, symptom descriptions, tone and sentiment of the caller, as well as background noises and voice biomarkers. These distinctive features of the call are immediately and automatically sent through multiple layers of artificial neural networks that look for patterns that might be useful for the dispatcher.
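Without speaking to Corti’s internal design, the general pipeline described here, raw call audio turned into spectral features and pushed through layers of a neural network, can be sketched roughly as follows. The librosa and PyTorch libraries, the file name, the network shape and the class labels are all assumptions made for illustration.

# Rough, generic sketch of an audio-classification pipeline (not Corti's
# actual design): raw call audio -> spectral features -> neural network.
import librosa
import torch
import torch.nn as nn

audio, sr = librosa.load("call_clip.wav", sr=16000)              # hypothetical file
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40)           # 40 coefficients x frames
features = torch.tensor(mfcc.mean(axis=1), dtype=torch.float32)  # summarize frames

# Tiny stand-in network; real systems use far deeper models on raw frames.
net = nn.Sequential(
    nn.Linear(40, 64), nn.ReLU(),
    nn.Linear(64, 3),  # e.g., cardiac arrest / stroke / other (invented labels)
)
scores = torch.softmax(net(features), dim=0)
print(scores)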

There are three diverse types of actions that Corti can immediately initiate or propose:

1. Question-answer patterns: What to ask next to uncover the worst-case scenario;

2. Detections: When the model is confident, it can alert us to potential situations such as stroke or cardiac arrest;

3. Extractions: It can automatically pull information from the call (e.g., address detection and validation) and immediately send it to other systems. This can be invaluable in terrorist or mass shooting events when callers are unsure of where they are. (A simplified sketch of detections and extractions appears below.)
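Here’s a deliberately crude illustration of detections and extractions on a call transcript. It uses simple keyword and pattern matching on invented phrases; Corti itself relies on neural networks over the full audio signal, not rules like these.

# Deliberately crude illustration of "detections" and "extractions" on a
# call transcript. Keyword rules and phrases are invented; Corti itself
# uses neural networks over the audio, not simple rules like these.
import re

transcript = "he just collapsed and he's not breathing, we're at 414 Oak Street"

# Detection: flag phrases associated with possible cardiac arrest.
ARREST_CUES = ("not breathing", "no pulse", "turning blue", "collapsed")
possible_arrest = any(cue in transcript.lower() for cue in ARREST_CUES)

# Extraction: pull a street address so it can be pushed to other systems.
address_match = re.search(r"\b\d{1,5}\s+\w+(?:\s\w+)?\s(?:street|st|avenue|ave|road|rd)\b",
                          transcript, flags=re.IGNORECASE)

print("Possible cardiac arrest:", possible_arrest)
print("Extracted address:", address_match.group(0) if address_match else None)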


Corti can also transcribe calls in real time, and it not only understands different dialects, but also helps the dispatcher understand what’s being said.

Corti deploys synthetic voice technology to help public safety answering points (PSAPs), particularly those that are extremely busy or understaffed, convert the passive waiting time some calls encounter into active triaging time. Corti can answer basic caller questions while a call is on hold and then send this information to the dispatcher when they’re available.

Today’s PSAPs record calls, but the recordings often end up on a server, only to be heard in rare cases. Corti’s platform has a built-in recording solution embedded with AI models that analyze every call and, as call volume increases, predict which calls should be the focus of additional dispatcher training, which calls should be checked for quality assurance, and which calls potentially hold patterns formerly unknown to Corti.

Imagine how useful it would be to have Corti provide dispatchers with a monthly training session where they listen and train on the five to 10 calls that hold the most learning potential. It has the potential to improve dispatch quality with very little effort.

Google recently announced a headset that uses AI to instantly translate different languages, as well as an image recognition app that will allow us to point at objects and instantly retrieve information; both may prove to be invaluable new tools for EMS crews in the field.5

Find Sepsis Patients & Alert Hospitals

Nashville-based Intermedix, also exhibiting at EMS Today, has a data science arm (formerly WPC Healthcare) that’s developed an amazing ML system. The system distilled 90,000 articles related to sepsis and correlated 2,200 risk attributes (i.e., variables). Using this knowledge, it can rapidly analyze each patient encounter in a single facility over the last 24 months to score whether a patient has a high likelihood of sepsis, based solely on a few data parameters entered at registration, without the clinical team providing vital signs or lab results, or having to study the patient through observation.
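As a rough, generic sketch of what scoring from a few registration parameters can look like (not Intermedix’s actual model; the scikit-learn classifier, features, encodings, training data and alert threshold are all invented):

# Generic sketch of risk scoring from registration data (not Intermedix's
# model). Fields, encodings, training data and threshold are all invented.
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical registration features: [age, arrived_by_ems, chief_complaint_code]
X_train = [[82, 1, 3], [45, 0, 1], [77, 1, 3], [30, 0, 2], [68, 1, 3], [25, 0, 1]]
y_train = [1, 0, 1, 0, 1, 0]   # 1 = later confirmed sepsis (made-up labels)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new encounter at the moment of registration, before vitals or labs.
new_encounter = [[79, 1, 3]]
risk = model.predict_proba(new_encounter)[0][1]
print(f"Estimated sepsis risk at registration: {risk:.2f}")
if risk > 0.8:                 # arbitrary alerting threshold
    print("Sepsis alert: notify the receiving ED team")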

One Intermedix hospital subscriber reported that, out of 1,535 patients entering their ED, the system identified 26 patients who had sepsis. Their positive predictive value (i.e., the proportion of patients flagged by the system who truly had sepsis) was 96.3%, meaning nearly every alert pointed to a real sepsis case.

Equally significant was their 99.7% negative predictive value. This means the software correctly identified the patients who didn’t have sepsis (before they were physically assessed by anyone in the hospital), so septic patients weren’t missed or discharged inappropriately, and the hospital could avoid costly tests and possibly unnecessary admissions.
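For readers less familiar with those two metrics, here’s how they’re computed from a simple confusion matrix. The counts below are placeholders chosen to be roughly consistent with the percentages above, not the hospital’s published breakdown.

# How positive and negative predictive value are computed. The counts
# below are placeholders for illustration, not the hospital's real data.
true_positives  = 26    # flagged by the system and truly septic (hypothetical)
false_positives = 1     # flagged but not septic (hypothetical)
true_negatives  = 1504  # not flagged and not septic (hypothetical)
false_negatives = 4     # missed by the system (hypothetical)

ppv = true_positives / (true_positives + false_positives)   # of those flagged, how many were right
npv = true_negatives / (true_negatives + false_negatives)   # of those cleared, how many were right

print(f"PPV: {ppv:.1%}  NPV: {npv:.1%}")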

With 1.5 million sepsis cases occurring in the United States, causing 285,000 deaths and resulting in $24 billion in hospitalization costs,6 one can easily see the cost benefit for a hospital that subscribes to this type of technology. If they’re smart, these hospitals will work with their EMS systems to integrate and correlate their records and prehospital radio reports.

Conclusion

AI and ML are a reality and hold great promise to impact our lives and the lives of our patients. Both will expand our capabilities in the future.

Built upon a wealth of preprogrammed information processed through new neural networks and specialty chips, AI systems can now gather and process information, questions and trends in an instant.

EMS managers and communications center directors need to learn about them at the JEMS/EMS Today Conference and watch for more on this amazing technology in JEMS and jems.com.

References

1. Heightman AJ. (July 23, 2017.) EMS community honors Jack and Todd Stout at Pinnacle. JEMS. Retrieved Sept. 28, 2017, from www.jems.com/articles/2014/07/ems-community-honors-jack-and-todd-stout.html.

2. Marr B. (Dec. 6, 2016.) What is the difference between artificial intelligence and machine learning? Forbes. Retrieved Sept. 28, 2017, from www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning.

3. Metz C. (Sept. 16, 2017.) Chips off the old block: Computers are taking design cues from human brains. The New York Times. Retrieved Sept. 28, 2017, from www.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html.

4. A.I. on the brink of changing the world. (Oct. 22, 2016.) 60 Minutes. Retrieved Sept. 28, 2017, from www.youtube.com/watch?v=k7hxi4Aj7W0.

5. Barrett B. (Oct. 5, 2017.) Google takes its assistant’s fate into its own hands. Wired. Retrieved Oct. 6, 2017, from www.wired.com/story/google-takes-assistants-fate-into-its-own-hands/.

6. Sepsis fact sheet. (Nov. 9, 2016.) Sepsis Alliance. Retrieved Sept. 29, 2017, from www.sepsis.org/downloads/2016_sepsis_facts_media.pdf.