Developing risk prediction tools can be resource- and time-intensive. And sometimes, merely building a working tool isn’t enough to see results. In this randomized clinical trial of patients with heart failure, Yale researchers found that providing clinicians with 1-year mortality risk estimates neither improved patient outcomes nor influenced decision-making.
Researchers randomized roughly 3,000 heart failure patients into “usual-care” and “alert” groups. In the alert (intervention) group, the predicted 1-year mortality rate and other relevant risk information were displayed to clinicians when they opened the order-entry section of a patient’s medical record.
Over a 384-day median follow-up, outcomes between the alert and usual-care groups were nearly identical for:
- 1-year mortality (27.1% vs. 26.1%).
- 30-day hospitalization (19.4% vs. 20.7%).
- Length of hospital stay (4.4 days vs. 4.3 days).
- Prescription of heart failure therapies at discharge (48.2% vs. 48.1%).
- Rates of palliative care referral (10.3% vs. 10.7%).
- Rates of advanced therapies like cardiac transplants and defibrillator implants.
One explanation for these findings could be that the risk estimates added little to clinician intuition, diminishing their potential benefit. But the authors believe the null findings are more likely due to “algorithm aversion” – the tendency of clinicians to favor their own intuition over statistical algorithms, even when the algorithms are “objectively superior.”
Identifying high-risk patients is critical for managing heart failure. And most of us would probably agree that providing risk prediction scores so clinicians can tailor patient interventions is a good thing. But this study shows that such initiatives may not always work as planned, and taking the time to evaluate their efficacy is an essential step toward successful real-world solutions.