Invented by Christopher Elliot Gillies, Daniel Francis Taylor, Kevin R. Ward, Fadi Islim, Richard Medlin, University of Michigan
Intensive care transfer, also known as ICU transfer, refers to the process of moving a patient from a general ward to an intensive care unit due to deteriorating health conditions. This decision is often made based on clinical judgment and subjective assessments, which can lead to delays or missed opportunities for timely intervention. Machine learning algorithms, on the other hand, can analyze vast amounts of patient data and identify patterns that humans may overlook, enabling early detection of critical events and proactive decision-making.
One of the key advantages of machine learning in predicting ICU transfer is its ability to process and analyze large datasets in real-time. By integrating electronic health records, vital signs monitoring, laboratory results, and other relevant data sources, machine learning algorithms can continuously monitor patients and identify subtle changes that may indicate a need for intensive care. This proactive approach allows healthcare providers to intervene earlier, potentially preventing complications and improving patient outcomes.
Moreover, machine learning algorithms can adapt and learn from new data, continuously improving their predictive accuracy. As more patient data becomes available, these algorithms can refine their models and enhance their ability to identify high-risk patients. This iterative learning process enables healthcare providers to stay ahead of unforeseeable events and make informed decisions based on real-time predictions.
The market for predicting intensive care transfer using machine learning is not limited to hospitals and healthcare systems. Insurance companies, for instance, can leverage these algorithms to assess the risk of ICU transfer for policyholders and adjust premiums accordingly. By accurately predicting the likelihood of critical events, insurers can optimize their risk management strategies and offer more tailored coverage options.
Beyond ICU transfer, machine learning algorithms have also shown promise in predicting other unforeseeable events in healthcare. For example, they can forecast the likelihood of hospital readmissions, surgical complications, or adverse drug reactions. By identifying patients at high risk, healthcare providers can allocate resources more efficiently, implement preventive measures, and improve patient safety.
However, the adoption of machine learning in healthcare is not without challenges. Ensuring the privacy and security of patient data is of utmost importance. Healthcare organizations must implement robust data protection measures and comply with relevant regulations to maintain patient trust. Additionally, the integration of machine learning algorithms into existing healthcare systems requires careful planning and collaboration between data scientists, clinicians, and IT professionals.
In conclusion, the market for predicting intensive care transfer and other unforeseeable events using machine learning is rapidly expanding. These algorithms offer the potential to revolutionize healthcare by enabling early detection and proactive intervention. As the technology continues to advance and more data becomes available, the accuracy and effectiveness of these predictions are expected to improve, leading to better patient outcomes and cost savings for healthcare systems.
The University of Michigan invention works as follows
The method for predicting patient deterioration comprises receiving an electronic health record data set relating to the patient; determining a risk score corresponding to the data set using a trained machine learning model; determining a threshold value using an online/reinforcement learning model; comparing the risk score with the threshold; and generating an alert when the risk score exceeds the threshold. The non-transitory medium contains program instructions which, when executed, cause a computer to receive a list of patients, to display selectable information for each of them in an order established by a feature priority algorithm, to receive a selection of one patient, and to retrieve vital signs information corresponding to that selection.
Background for Predicting intensive care transfer and other unforeseeable events using machine learning
This background description is provided to generally present the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, is not expressly or impliedly admitted as prior art against the present disclosure.
It is difficult for clinicians to identify patient deterioration in general hospital wards. Early detection of deterioration can lead to a 3.5-fold reduction in mortality risk, a five-day decrease in hospital stay, and cost savings of over $13,000 per patient episode. In the past, rule-based scores like the Modified Early Warning Score (MEWS) and Rothman Index were used to quantify patient risk. To address their limitations, statistical learning systems like the Electronic Cardiac Arrest Risk Triage (eCART) score were later developed. Both the rule-based scores and these statistical learning systems, however, produce many false alarms and cannot predict patient deterioration reliably. Moreover, the existing systems do not use nonlinear models to make automatic predictions, and clinicians may be reluctant to rely on them in making clinical decisions because they lack confidence in the outputs.
In one aspect, the method for predicting patient deterioration includes receiving an electronic record of patient data, determining a risk score for that patient using a trained machine learning model, determining a threshold value using an online/reinforcement learning model, comparing the risk score with the threshold value, and generating an alarm when the risk score exceeds the threshold.
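The claimed loop — score the patient record with a trained model, compare against a threshold, alert when exceeded — can be sketched in a few lines. Everything here is illustrative: the function names, the logistic stand-in for the trained model, and the feature weights are assumptions, not details from the patent.

```python
import math

def score_patient(features, weights, bias=0.0):
    # Toy stand-in for a trained ML model: a logistic risk score in (0, 1).
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def check_for_alert(risk_score, threshold):
    # Generate an alert record when the risk score exceeds the threshold.
    return {"alert": risk_score > threshold,
            "risk": risk_score,
            "threshold": threshold}
```

A caller would feed EHR-derived features (vitals, labs) into `score_patient` and pass the result to `check_for_alert` along with the current, possibly clinician-tuned, threshold.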
The computer can be instructed to display selectable information for each patient in a list, ordered according to a feature importance algorithm, and to prompt the user to select one patient from the list.
The present techniques include systems and methods for quantifying risk in patients/care receivers to help clinicians/care providers identify deteriorating cases. Herein, a "care receiver" is any person who receives care from a caregiver. A care receiver can be, for example, an inpatient or outpatient in a clinic, hospital, or other setting, or someone receiving care elsewhere, such as a rehabilitation facility, nursing home, or home care setting. A clinician can be a registered nurse, certified nurse assistant, physician, medical doctor, specialist, or any other type of care provider (e.g., a radiologist, cardiologist, or home care provider). The terms "caregiver" and "clinician" are used interchangeably herein, as are the terms "patient" and "care receiver." "Care" includes any observation (e.g., visual or instrument-based monitoring of a patient, including by computer-based instruments) and/or hands-on intervention by a caregiver.
The present techniques include reinforcement learning (RL) techniques, such as a method for adaptive threshold tuning (ATT), which allows classifiers to dynamically update a prediction threshold based on the behavior of clinicians. In some embodiments, the present techniques use a technique called Predicting Unforeseen Events and Intensive Care Transfers, which can predict patient deterioration (e.g., ICU transfer, activation of rapid response teams, death, etc.) better than current methods. The present techniques, for example, may raise fewer false alarms, and their predictions come with an explanation of the top reasons why each prediction was made, which a clinician can use in making a decision.
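The patent does not publish the exact ATT update rule, but the idea — nudge the alarm threshold using clinician feedback — can be sketched as a simple reinforcement-style update. The step size, clipping band, and feedback encoding below are all assumptions for illustration.

```python
class AdaptiveThreshold:
    """Minimal sketch of adaptive threshold tuning (ATT) from clinician feedback."""

    def __init__(self, threshold=0.5, lr=0.05, lo=0.05, hi=0.95):
        self.threshold = threshold
        self.lr = lr               # step size per feedback event (assumed)
        self.lo, self.hi = lo, hi  # keep the threshold in a sane band

    def update(self, alarm_fired, clinician_acted):
        # Alarm fired but clinician dismissed it -> likely false alarm:
        # raise the threshold so the system alarms less often.
        if alarm_fired and not clinician_acted:
            self.threshold += self.lr
        # No alarm but clinician escalated care anyway -> missed event:
        # lower the threshold so the system alarms sooner.
        elif not alarm_fired and clinician_acted:
            self.threshold -= self.lr
        self.threshold = min(self.hi, max(self.lo, self.threshold))
        return self.threshold
```

Because each clinician generates their own feedback stream, one `AdaptiveThreshold` instance per clinician would realize the per-clinician tuning described later in this document.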
In some embodiments, ATT can be used to predict patient risk. The present techniques can use state-of-the-art machine learning (ML), such as deep learning, that achieves a higher positive predictive value (PPV) than other measures of patient deterioration. The present techniques also automatically generate explanations/interpretations for the predictions produced by their nonlinear models, using feature importance algorithms (e.g., SHapley Additive exPlanations (SHAP) value analysis). Hardware is available to integrate the present techniques into any hospital system, and the clinician's workflow will be minimally altered by implementing the present techniques, with or without ATT.
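SHAP proper averages each feature's contribution over many feature orderings; as a much simpler, dependency-free stand-in that conveys the same "top reasons" idea, the sketch below computes an occlusion-style attribution: how much the risk score changes when each feature is replaced by a baseline value. The toy model, the baseline choice, and all names are assumptions, not the patent's method.

```python
import math

def risk_model(features):
    # Toy nonlinear risk model standing in for the trained classifier.
    z = 2.0 * features[0] - 1.5 * features[1] + features[0] * features[2]
    return 1.0 / (1.0 + math.exp(-z))

def occlusion_attributions(features, baseline):
    # Per-feature contribution: score(x) minus score with feature i
    # replaced by its baseline (e.g., population mean) value.
    full = risk_model(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline[i]
        attributions.append(full - risk_model(occluded))
    return attributions

def top_reasons(features, baseline, names):
    # Rank features by absolute contribution, most important first,
    # as a clinician-facing explanation list.
    attrs = occlusion_attributions(features, baseline)
    return sorted(zip(names, attrs), key=lambda p: -abs(p[1]))
```

In a real deployment, `occlusion_attributions` would be replaced by a SHAP explainer over the actual trained model; the ranked output format is what the clinician-facing display would consume.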
In some embodiments of the present techniques, a gradient-boosting tree algorithm is used to predict ICU transfers or deaths as a proxy measure for patient deterioration. In other embodiments, a deep learning model is used. The present techniques are generally more accurate and explainable than previously known techniques. The predictions generated by the present techniques may also include an explanation of the factors that contributed to the prediction, and the prediction threshold can be adjusted based on the behavior of each clinician. The present techniques are also the first to use a nonlinear tree-based classifier for predicting patient outcomes while explaining the factors that influence those predictions. They are passive in nature, as they require no extra work from the clinician and do not change the environment of care (e.g., the hospital). As mentioned above, whereas current measures of patient deterioration (e.g., MEWS, the Rothman Index, or linear classifiers such as eCART) are simple rule-based or linear predictors, the present techniques use state-of-the-art ML to provide additional expository data. Using nonlinear models for prediction while generating automatic explanations is unknown in the art.
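To make the gradient-boosting idea concrete without implying anything about the patent's actual model, here is a minimal, dependency-free sketch: an ensemble of depth-1 "stumps", each fitted to the residual error of the ensemble so far (L2 boosting on a single feature). A real deployment would use a library such as XGBoost or scikit-learn over many EHR features.

```python
def fit_stump(xs, residuals):
    # Find the single threshold split minimizing squared error (1-D feature).
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x, s=split, l=lmean, r=rmean: l if x <= s else r

def gradient_boost(xs, ys, n_rounds=20, lr=0.3):
    # L2 gradient boosting: each new stump fits the current residuals,
    # and its (shrunken) output is added to the ensemble prediction.
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)
```

The learning rate `lr` shrinks each stump's contribution, which is the standard regularization that makes boosted ensembles generalize; tree-based models like this also pair naturally with the SHAP-style explanations discussed above.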
Example Computing Environment
FIG. 1 shows an example computing environment 100 that can be used to implement the present techniques, such as training, operating, and tuning ML models (e.g., with ATT), according to one embodiment. The environment 100 can include a client device 102, a server 104, and a network 106.
The client 102 is located remotely from the server 104 and is connected to it via the network 106. The network 106 can include a combination of wired or wireless communication networks, such as local area networks (LANs), metropolitan area networks (MANs), or wide area networks (WANs). The network 106 could include, as an example, a cellular network (e.g., 4G), the Internet, and a server-side LAN. As another example, the network could support a 4G cellular connection to a user's mobile phone and an IEEE 802.11 connection to the client. Although referred to as a "server" herein, the server 104 can in some implementations include multiple servers or other computing devices, which may be distributed across a wide geographic area. In some embodiments, multiple clients or servers can be used by various parties. In one embodiment, for example, a clinician uses one client 102 and a patient uses another client 102; the two clients may have different functionality, provided by different computer-executable instruction sets and/or hardware configurations. The clinician uses the client to access the user interface, which includes predictions and/or descriptions, as discussed in this document.
The client 102 can include both hardware and software components. The client 102 can be implemented, for example, using a mobile computing device (e.g., a smartphone), a laptop, or a tablet. The client 102 can include computer-executable instructions for retrieving/receiving data to render in a graphical user interface (GUI), and/or rendered GUI components (e.g., images, widgets, or executable code). The client 102 can include a processor 120, a memory 122, a user interface 124 (input/output), a network interface 126, and a user application 128. The processor 120 can be a single processor (e.g., a central processing unit (CPU)), or it may include multiple processors (e.g., a CPU and a graphics processing unit (GPU)). In some cases, the client 102 can include additional components, such as a microphone, a video display/recording device, or a vibratory element such as a vibration motor.
In one embodiment, the sequence can include a clinician viewing (or otherwise noting) an alarm that corresponds to a particular patient. The clinician can then examine plotted explanations for vitals, which are sorted by importance according to an algorithm. The clinician will take action if the explanations indicate a patient-related physiological issue, and can ignore, delay, or suppress alarms when the explanations do not suggest a physiological problem.
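The review sequence above can be sketched as a small decision helper. The explanation tuple format and the per-explanation "physiological" flag are illustrative assumptions; the patent does not specify a data format for explanations.

```python
def triage_alarm(explanations, snooze_minutes=30):
    # explanations: list of (vital_name, contribution, physiological) tuples,
    # assumed pre-sorted by absolute contribution (most important first).
    # If any top explanation points at a physiological issue, escalate;
    # otherwise suppress the alarm for a snooze window.
    if any(physiological for _, _, physiological in explanations):
        return {"action": "escalate"}
    return {"action": "suppress", "snooze_minutes": snooze_minutes}
```

In practice the suppress/delay decision would itself feed back into the ATT threshold update described earlier, since each dismissed alarm is a signal about false-alarm rate.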
The memory 122 can be a non-transitory, computer-readable storage device or unit, or a collection of devices/units, that includes persistent (e.g., hard disk) and/or non-persistent components. The memory 122 can store instructions executable by the processor 120 for performing various operations, including the instructions of different software applications (e.g., the user application 128) and data generated, retrieved, and/or received by those applications. In the example implementation shown in FIG. 1, the memory 122 contains at least the user application 128. The processor 120 executes the user application 128, which facilitates bidirectional communication between the client 102 and the server 104.
The user interface 124 comprises hardware, firmware, and/or software configured to allow a user to interact with the client 102 (i.e., both to input data and to perceive outputs). The user interface 124 could include, for example, a touchscreen with both manual input and display capabilities (e.g., a video display device). The user interface 124 can also include a keyboard to accept user inputs and/or a microphone (with associated processing components) that provides voice control/input capability to the user. The user interface may be a combination of peripheral devices (e.g., a keyboard, mouse, and display screen). In some embodiments, the client 102 can include different implementations of the user interface 124 (e.g., a first interface 124 to display patient risk scores and a second interface 124 to display thresholds).
The network interface 126 comprises hardware, software, and/or firmware that enables the client 102 to exchange electronic data via the network 106 using a wired or wireless connection. The network interface 126, for example, may include a cellular transceiver or WiFi transceiver, transceivers that support one or more other wireless communication technologies such as 4G, and/or a wired Ethernet adapter.
In some embodiments, the user application 128 (or other software in the memory 122) can be used to display the output of the ML models and to receive user input, which is then sent to the server 104. If the client 102 is a smartphone, the user application 128 could be a mobile application (e.g., an Android or iPhone app). The user application 128, for example, may include computer-executable instructions to render one or more GUIs via the user interface 124, along with instructions for receiving/retrieving data and displaying it in the GUIs. The GUI screens can be interactive, allowing the user to perform a variety of functions. The user application 128 can, for instance, allow the user to select options from a GUI display using one or more digits, or to enter values (e.g., text or numeric data) using a software-based keyboard. In certain embodiments, the user may input hardware events (e.g., mouse scrolling and clicks). The GUIs may receive keystroke and mouse events from the user and either process them or transmit them to the server 104 for processing. The user interface may receive input events from the device (e.g., a touchscreen) and then send them to the user application 128 for processing. The user application 128, in general, allows the user to access features of the application without having to perform any programming.
The server 104 can include both hardware and software components and can be implemented, for example, using one or multiple server devices. The server 104 comprises a processor 160, a memory 162, a user application 164, a network interface 166, a model training module 168, a model operation module 170, and an electronic database 180. Further components may be included in the server 104. The server 104 can include computer-executable instructions for retrieving/receiving information relating to training and/or operating models (e.g., EHR data). The server 104 can also include instructions to receive requests for data from the client 102 via the network 106 and/or instructions to receive responses from the client 102 that include parameters (e.g., HTTP POST requests with electronic form data). The server 104 can retrieve data from the client 102 or the database 180.
The processor 160 can be a single processor (e.g., a central processing unit (CPU)), or it may include multiple processors (e.g., a CPU and a GPU). The memory 162 is a computer-readable, non-transitory device or unit, or a collection of devices/units, that can include persistent (e.g., hard disk) and/or non-persistent components. The memory 162 can store data generated or used by, for instance, the user application 164, the model training module 168, and the model operation module 170. The memory 162 can also store untrained and/or trained models and information from the electronic database 180.
The user application 164 may be an application configured to mediate communication between the client 102 and the server 104. The user application 164 can be a program (e.g., a web application executed by the server) that includes one or multiple application programming interfaces. The user application 164 may receive HTTP requests from the user application 128 (e.g., GET or POST requests), and the requests can include parameters (e.g., dates, times, and query commands/parameters). The user application 164 can issue HTTP responses to the user application 128, which may include response data.