The Trouble with LACE...
The LACE score is one of the most widely used tools for assessing a patient's risk of early mortality or unplanned readmission. Originally developed by Dr. Carl van Walraven and colleagues, the index was a pioneer in this space and is well known for its simplicity, requiring only four pieces of information to assess risk: Length of inpatient stay, Acuity of admission, Comorbidities, and prior Emergency department utilization. Many hospitals find this transparency and ease of use appealing, but let's take a closer look.
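For readers curious how those four inputs actually combine, here is a minimal sketch of the LACE point system as commonly described. The band boundaries and point values below are our reading of the published index; verify them against the original van Walraven et al. paper before any real use.

```python
def lace_score(length_of_stay_days: int, emergent_admission: bool,
               charlson_index: int, ed_visits_prior_6mo: int) -> int:
    """Illustrative sketch of the LACE point system (0-19 total).

    Point values are an assumption based on the published index and
    should be checked against the source before clinical use.
    """
    # L: length of inpatient stay, in banded points
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = length_of_stay_days   # 1-3 days -> 1-3 points
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7

    # A: acuity -- 3 points if the admission was emergent/unplanned
    a_points = 3 if emergent_admission else 0

    # C: Charlson comorbidity index, capped at 5 points
    c_points = charlson_index if charlson_index < 4 else 5

    # E: emergency department visits in the prior six months, capped at 4
    e_points = min(ed_visits_prior_6mo, 4)

    return l_points + a_points + c_points + e_points
```

Note how little the score asks for: four integers in, one integer out. That austerity is exactly the simplicity the original authors prized, and exactly the limitation the rest of this post examines.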
Is Simplicity Always the Best Answer?
A recent study by West Virginia University found that a powerful predictor is missing from the LACE score: how a patient's care is financed. Our last post highlighted the value of context, and this is a perfect example. LACE was developed in Canada, where publicly funded healthcare makes it unnecessary to consider who pays for care. That doesn't hold for the United States. Variation in outcomes between government-funded and privately funded care is widely studied, and we know that insurance status can serve as a proxy for underlying socioeconomic characteristics such as limited education and low income.
While Dr. van Walraven and colleagues did their due diligence in evaluating other factors, clinical and otherwise, the authors concluded that augmenting models with community-level data is “impractical to clinicians” since these data are not always attainable. NTELLISIGHTS recognizes the value of blending clinical data with geospatial, community, lifestyle, and behavioral attributes to assess each patient's risk. And while this sounds complicated, we do all the heavy lifting: pairing your clinical information with exogenous data points, calculating risk for each individual patient, and seamlessly embedding that information into our care coordination workflow application.
Modeling for Explanation vs. Prediction
The developers of LACE relied heavily on the statistical significance of each individual risk factor when deciding whether to include it in the final algorithm. This approach is more common in research and academic settings and is typically referred to as modeling for explanation.
Modeling for explanation ensures that each risk factor is relevant while controlling for other factors. And while it allows impact to be isolated and quantified, it doesn't necessarily produce the most accurate predictive model. Modeling for prediction, on the other hand, aligns a model's evaluation criteria with its business objective. In other words, if the aim is to predict readmissions, the evaluation criteria should be the model's precision and recall on readmissions, not the significance of each individual factor in the model.
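To make that evaluation criterion concrete, here is a small sketch that scores a set of binary readmission predictions by precision and recall rather than by per-factor significance. The labels are purely illustrative.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary readmission predictions.

    y_true / y_pred: sequences of 0/1, where 1 = readmitted (actual)
    or flagged as high-risk (predicted).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    # Guard against division by zero when nothing is flagged/readmitted
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

Precision answers "of the patients we flagged, how many were actually readmitted?"; recall answers "of the patients who were readmitted, how many did we flag?" Judging a model on these numbers, rather than on the p-value of each input, is the essence of modeling for prediction.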
Efforts that model for explanation, such as LACE, may omit valuable information that could help predict readmission. Just think of all the patient-level and community-level data now available to estimate risk: EHRs, patient assessments, and even publicly available data.
While these omissions and approaches may seem trivial, they raise the question of what more could be done. NTELLISIGHTS goes beyond LACE to account for the whole person, recognizing that the drivers of a patient's readmission risk often lie outside the walls of a hospital and that clinical interventions are not the only available solution. Armed with these additional data, we are able to increase precision over LACE by 50%. What does this mean for a client? It means that resources are used more appropriately, and the right patients have the opportunity to receive the right intervention.
Modifiable Risk Factors
LACE has been a staple in predicting readmission risk, but it leaves something to be desired in terms of actionability. For instance, a patient's length of stay can be indicative of many things, such as patient complexity, insurance benefit allotment, or clinician workflow and bandwidth. Furthermore, some strategies to reduce length of stay may run counter to quality and care guidelines or policies.
Acuity (i.e., admission through the ED) and prior ED utilization both reflect events that have already occurred; a clinician cannot modify these risk factors for the admission at hand. And while knowing this information can prompt a timely conversation about the value of having a primary care provider, it doesn't necessarily provide the detail needed to address the underlying health, social, or access issues that drive patients to seek care in the ED.
Every readmission prediction model on the market includes comorbidities, but the level of detail is often insufficient to develop the proper intervention strategies. For instance, two CHF patients may need different interventions to properly mitigate risk: Patient A may simply need to adhere more closely to their beta blockers, while Patient B may need a stricter diet and exercise regimen. To treat at this level, analytic inputs must be more granular. NTELLISIGHTS analyzes and tees up actionable risk factors so the clinical care team has more information at its disposal to make a meaningful impact.
Model Drift & Applicability
The statistician George Box famously said, “all models are wrong, but some are useful.” In other words, every model has limits on how it can and should be applied. The minds behind LACE acknowledge its limitations, stating, “the index cannot be used reliably in patient populations that were not involved in its derivation.” LACE was built using data collected between 2002 and 2006 on patients in the province of Ontario, a sample that makes it hard to generalize the index to other demographic, regional, and clinical profiles. Moreover, the LACE calculation is static and has not changed since its publication in 2010, with the exception of the release of LACE+. The accuracy of this revision exceeds that of its predecessor, but it too was built on a Canadian population with data from before 2009.
The point of the story: algorithms such as LACE and LACE+ are not always well fit to detect differences across populations and consequently risk misclassifying high-risk patients. The authors themselves explicitly state that they do not recommend using these algorithms for decision-making at the patient level, but only for outcomes research. NTELLISIGHTS has built a scalable platform that allows our data scientists to custom-build and calibrate machine learning models for each client's patient population, resulting in more pertinent recommendations and better outcomes.
So What Now?
Dr. van Walraven and colleagues are considered giants in this space for the development of LACE and its accessibility. We stand on the shoulders of these giants and add to their insight and innovation. We place clients at the focus of our analytics and leverage vast amounts of data to estimate risk, focusing on the ability to predict readmissions for each client's patient population. We aim to highlight a patient's modifiable risk factors and provide actionable workflows and resources to address them. Let's get started today!