Early detection of patient deterioration has been shown to reduce mortality risk, shorten length-of-stay, and lower hospital costs, yet identifying deteriorating patients remains a challenge for clinicians.

A system that passively quantifies patient risk could help clinicians to identify deteriorating patients. To address this need, MCIRCC has developed PICTURE (Predicting Intensive Care Transfers and other Unforeseen Events), a machine learning algorithm utilizing electronic health record (EHR) data to passively and accurately predict ICU transfer or death as a proxy for patient deterioration. 

In addition to predicting deterioration, PICTURE also provides an explanation of the main factors contributing to its prediction in each instance. This transparency is invaluable to clinicians who can use these explanations to guide decisions around patient care.

SIGNIFICANT NEED

Delayed identification of declining patients leads to increased mortality risk, increased length-of-stay, and higher costs for hospitals. Existing scores that quantify patient risk raise too many false alarms and have not been able to reliably predict patient deterioration. PICTURE harnesses machine learning to more accurately predict patient deterioration with fewer false alarms than any system currently available.

COMPETITIVE ADVANTAGE

SEAMLESS INTEGRATION (PASSIVITY)

PICTURE was designed for seamless integration into any hospital. Care was taken in how missing data is handled, resulting in a significantly more robust algorithm. As a result, the model runs successfully on the lab values already collected for the majority of patients in most hospitals. Clinicians do not need to change their workflow or practices, and the program will continue to make accurate predictions even if policy changes alter which tests are routinely ordered. This makes PICTURE easy to implement, simple to interpret, and robust to change.
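
The brief does not name PICTURE's underlying model, but one common way to achieve this kind of robustness is a gradient-boosted tree classifier that handles missing values natively rather than requiring imputation. The sketch below is illustrative only; the feature names, data, and model choice are assumptions, not the deployed system.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

# Illustrative EHR-style feature matrix: labs that were never ordered are
# left as NaN rather than imputed.
X = pd.DataFrame({
    "heart_rate": [88, 112, np.nan, 97],
    "lactate":    [1.1, np.nan, 4.2, np.nan],   # often not ordered
    "creatinine": [0.9, 1.4, 2.7, np.nan],
})
y = np.array([0, 1, 1, 0])  # 1 = ICU transfer or death within the window

# Gradient-boosted trees learn a default split direction for missing values,
# so predictions degrade gracefully when a test is unavailable.
model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

risk = model.predict_proba(X)[:, 1]   # deterioration risk per encounter
```

Because missing labs follow learned default branches instead of imputed values, a unit that stops routinely ordering a given test does not break the model; the scores simply rely more heavily on the measurements that are present.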

PREDICTION EXPLANATION

In addition to supplying the end user with a patient deterioration score, PICTURE uses SHAP (SHapley Additive exPlanations) to explain each of its predictions. In the clinical setting, this provides a layer of transparency: the clinician can see why PICTURE raised an alarm and quickly judge its legitimacy, which helps mitigate alarm fatigue. When PICTURE catches patient decline, the explanations also lend themselves to suggesting a course of treatment.
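
As a rough illustration of how SHAP produces per-prediction attributions, the sketch below applies shap.TreeExplainer to the hypothetical model and features from the earlier sketch; the output ranks the features that pushed an individual patient's score up or down relative to the baseline.

```python
import shap

# Per-prediction attributions: which features moved this patient's score,
# and in which direction, relative to the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Top contributors for a single encounter, largest magnitude first.
contribs = sorted(zip(X.columns, shap_values[0]),
                  key=lambda kv: abs(kv[1]), reverse=True)
for feature, value in contribs[:3]:
    print(f"{feature}: {value:+.3f}")
```

A positive value indicates the feature pushed the score toward deterioration; a negative value pushed it toward stability. This is the kind of ranked factor list a clinician could scan to judge whether an alarm is plausible.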

FEEDBACK LOOP

This feature allows clinicians to reject PICTURE’s prediction and explanation, and to save the information with the subsequent patient trajectory, thereby further refining the model.
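
The brief does not describe the feedback mechanism in detail; the following is a minimal sketch, with a hypothetical schema, of how an accepted or rejected alarm could be stored alongside the eventual outcome so it can later be folded back into training data.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import json

# Hypothetical record of one clinician response to a PICTURE alarm;
# the fields are illustrative, not the deployed schema.
@dataclass
class AlarmFeedback:
    encounter_id: str
    alarm_time: datetime
    risk_score: float
    top_factors: list          # SHAP contributors shown to the clinician
    clinician_accepted: bool   # False = prediction rejected at the bedside
    outcome_within_24h: str    # e.g. "icu_transfer", "death", "stable"

def log_feedback(record: AlarmFeedback, path: str = "feedback.jsonl") -> None:
    # Append one JSON line per alarm; these records can later be joined
    # with the patient trajectory for model refinement.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record), default=str) + "\n")
```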

USER DEFINED THRESHOLDS

PICTURE allows each unit in the hospital to specify its desired precision by adjusting the alarm sensitivity.
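
As an illustration, a unit-specific operating point could be chosen from a precision-recall curve on held-out data. The helper below is hypothetical, with toy labels and scores; it returns the lowest alarm cutoff that still meets a unit's target precision.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def threshold_for_precision(y_true, scores, target_precision):
    """Lowest score cutoff whose precision meets the unit's target."""
    precision, _, thresholds = precision_recall_curve(y_true, scores)
    # precision has one more entry than thresholds; align the pairs.
    for p, t in zip(precision[:-1], thresholds):
        if p >= target_precision:
            return t
    return thresholds[-1]  # fall back to the strictest cutoff

# Toy held-out labels and model scores, for illustration only.
y_valid = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores  = np.array([0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.2, 0.9])

icu_cutoff = threshold_for_precision(y_valid, scores, target_precision=0.75)
print(f"Alarm when risk >= {icu_cutoff:.2f}")
```

A high-acuity unit might tolerate more false alarms to maximize sensitivity, while a lower-acuity unit might demand higher precision; the same model serves both simply by moving the cutoff.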

LEARNED THRESHOLDS

PICTURE can also learn the alarm threshold: the system systematically varies the cutoff at which it raises alarms and tracks how clinicians respond. By doing so, it can converge on an optimal threshold that reflects the behavior of each hospital's practitioners.
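
The brief does not specify how this learning works; one simple, purely illustrative scheme is to nudge the alarm cutoff based on whether clinicians accept or reject each alarm, as sketched below.

```python
# Hypothetical online adjustment: accepted alarms suggest room to alarm
# earlier (lower cutoff); rejected alarms suggest a more conservative cutoff.
def update_threshold(threshold, accepted, step=0.01, lo=0.05, hi=0.95):
    threshold += -step if accepted else step
    return min(max(threshold, lo), hi)   # keep the cutoff in a sane range

threshold = 0.40
for accepted in [True, False, False, True, True]:   # toy feedback stream
    threshold = update_threshold(threshold, accepted)
```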

Want to learn more about this product? Interested in becoming an industry partner?