Wednesday, May 1, 2019

Miller et al. 2018: Physicians Predict Readmission Just As Well as Computers

There is a great deal of interest in algorithms and artificial intelligence in healthcare, including a recent CMS initiative from the Center for Medicare & Medicaid Innovation (CMMI) - here.

However, the value of algorithms and other diagnostic or prognostic tests may rest on the assumption that you don't already know anything specific about the patient.  For example, what is the risk of recurrence of breast cancer?  A molecular test may predict it fairly well, but there is also real predictive power in things you already know, like tumor size, grade, and number of positive lymph nodes.
(This is a situation where overuse of p-values has been criticized - here.  For example, if the molecular test predicts outcomes at p=.04 or p=.02 while the clinical criteria alone predict at p=.06, the real added value of the molecular test may be small.)
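To make that concrete, here is a minimal sketch on simulated data (the clinical factors, marker, and effect sizes are all hypothetical, not from any real test): a marker that mostly echoes risk already captured by clinical factors can look statistically significant yet barely move the model's discrimination (AUC).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical clinical factors: tumor size, grade, positive nodes (standardized).
clinical = rng.normal(size=(n, 3))
risk = clinical @ np.array([0.8, 0.5, 0.6])           # true underlying risk
marker = risk + rng.normal(scale=2.0, size=n)         # marker: a noisy echo of that risk
recurrence = rng.random(n) < 1 / (1 + np.exp(-risk))  # simulated outcomes

both = np.column_stack([clinical, marker])
base = LogisticRegression().fit(clinical, recurrence)
full = LogisticRegression().fit(both, recurrence)

# The marker is correlated with outcome, but discrimination barely improves
# because the clinical factors already carry most of the signal.
print(f"clinical-only AUC:     {roc_auc_score(recurrence, base.predict_proba(clinical)[:, 1]):.3f}")
print(f"clinical + marker AUC: {roc_auc_score(recurrence, full.predict_proba(both)[:, 1]):.3f}")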
Miller et al. explicitly looked at the ability of attending physicians, residents, and nurses to predict 30-day readmissions from clinical data.  Their predictive ability matched or slightly exceeded that of a widely used algorithm, the LACE index (attending AUC 0.69 vs. LACE AUC 0.62).

Of course, the LACE algorithm can do things physicians can't do, like score 20,000 EHRs at once.
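For readers who haven't seen it, LACE is a simple additive score: Length of stay, Acuity of admission, Comorbidity (Charlson index), and Emergency department visits in the prior six months. A minimal sketch of the published point assignments (van Walraven et al., CMAJ 2010) follows; treat it as illustrative rather than a validated implementation.

# Minimal sketch of LACE scoring as published (van Walraven et al., CMAJ 2010).
# Total ranges 0-19; higher scores indicate higher readmission risk.
def lace_score(length_of_stay_days: int,
               emergent_admission: bool,
               charlson_index: int,
               ed_visits_past_6mo: int) -> int:
    # L: length of stay
    if length_of_stay_days < 1:
        l = 0
    elif length_of_stay_days <= 3:
        l = length_of_stay_days        # 1, 2, or 3 points
    elif length_of_stay_days <= 6:
        l = 4
    elif length_of_stay_days <= 13:
        l = 5
    else:
        l = 7
    # A: acuity (emergent/urgent admission)
    a = 3 if emergent_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c = charlson_index if charlson_index <= 3 else 5
    # E: ED visits in the prior 6 months, capped at 4 points
    e = min(ed_visits_past_6mo, 4)
    return l + a + c + e

# Example: 5-day stay admitted through the ED, Charlson 2, one prior ED visit.
print(lace_score(5, True, 2, 1))   # 10, near the study's mean for non-readmitted patients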

Miller et al. here, open access.

Miller, W. D., et al. (2018). "Clinicians can independently predict 30-day hospital readmissions as well as the LACE index." BMC Health Serv Res 18(1): 32.
BACKGROUND: Significant effort has been directed at developing prediction tools to identify patients at high risk of unplanned hospital readmission, but it is unclear what these tools add to clinicians' judgment. In our study, we assess clinicians' abilities to independently predict 30-day hospital readmissions, and we compare their abilities with a common prediction tool, the LACE index.

METHODS: Over a period of 50 days, we asked attendings, residents, and nurses to predict the likelihood of 30-day hospital readmission on a scale of 0-100% for 359 patients discharged from a General Medicine Service. For readmitted versus non-readmitted patients, we compared the mean and standard deviation of the clinician predictions and the LACE index. We compared receiver operating characteristic (ROC) curves for clinician predictions and for the LACE index.

RESULTS: For readmitted versus non-readmitted patients, attendings predicted a risk of 48.1% versus 31.1% (p < 0.001), residents predicted 45.5% versus 34.6% (p = 0.002), and nurses predicted 40.2% versus 30.6% (p = 0.011), respectively. The LACE index for readmitted patients was 11.3, versus 10.1 for non-readmitted patients (p = 0.003). The area under the curve (AUC) derived from the ROC curves was 0.689 for attendings, 0.641 for residents, 0.628 for nurses, and 0.620 for the LACE index. Logistic regression analysis suggested that the LACE index only added predictive value to resident predictions, but not attending or nurse predictions (p < 0.05).

CONCLUSIONS: Attendings, residents, and nurses were able to independently predict readmissions as well as the LACE index. Improvements in prediction tools are still needed to effectively predict hospital readmissions.
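As an illustration of the study's core comparison (on invented data, not the authors' - I've borrowed the reported group means and assumed the spreads), the two AUCs can be computed directly from each predictor's raw scores:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 359                                    # the study's discharge count

readmitted = rng.random(n) < 0.2           # assumed ~20% readmission rate
# Clinician 0-100% estimates: group means from the paper, spread assumed.
clinician = np.clip(rng.normal(np.where(readmitted, 48.1, 31.1), 20.0), 0, 100)
# LACE scores (0-19): group means from the paper, spread assumed.
lace = np.clip(rng.normal(np.where(readmitted, 11.3, 10.1), 2.5), 0, 19)

print(f"clinician AUC: {roc_auc_score(readmitted, clinician):.3f}")
print(f"LACE AUC:      {roc_auc_score(readmitted, lace):.3f}")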


 

