One issue with MAAA-type prognostic tests is that we take a group of patients - say, patients with PSA of 4 to 10 - and put them into a bucket (like putting marbles into a barrel where we can no longer see their colors). Then we take the marbles out of the barrel one at a time, apply the new prognostic test, and classify them as lower or higher risk. The obvious problem is that you already KNEW the PSA 4 patients had much lower risk than the PSA 10 patients (and, for example, PSA 11 patients have such high risk you wouldn't even consider them for this test). Yes, the new test may re-stratify the patients somewhat better than PSA alone; but how much better can be difficult to determine.*
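To make the "how much better" question concrete, here is a minimal sketch in Python (simulated data, made-up coefficients, and scikit-learn; nothing below comes from any real MAAA product) that compares a PSA-only risk model against PSA plus a hypothetical new marker:

```python
# Hypothetical illustration: how much does a new marker add beyond PSA alone?
# All numbers are simulated for demonstration; they do not reflect any real test.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort restricted to PSA 4-10 (the "barrel" of marbles)
psa = rng.uniform(4, 10, n)

# A new marker that is partly redundant with PSA, plus some extra signal
marker = 0.5 * psa + rng.normal(0, 1.5, n)
logit = -5.5 + 0.4 * psa + 0.3 * marker
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = high-grade disease

# Model 1: PSA alone (what we already knew)
m1 = LogisticRegression().fit(psa.reshape(-1, 1), outcome)
auc1 = roc_auc_score(outcome, m1.predict_proba(psa.reshape(-1, 1))[:, 1])

# Model 2: PSA plus the new marker (what the new test adds)
X2 = np.column_stack([psa, marker])
m2 = LogisticRegression().fit(X2, outcome)
auc2 = roc_auc_score(outcome, m2.predict_proba(X2)[:, 1])

print(f"AUC, PSA alone:        {auc1:.3f}")
print(f"AUC, PSA + new marker: {auc2:.3f}")
print(f"Incremental gain:      {auc2 - auc1:.3f}")
```

Even in a toy example like this, the incremental gain in discrimination is modest next to what PSA alone already provides, and a small AUC bump by itself does not tell you whether the test would change any clinical decisions.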
Andrew Vickers of Memorial Sloan-Kettering has worked on this issue in a number of very interesting papers.
- See a 2009 interview in Cancer Network about his ideas on the clinical utility of prognostic tests, here.
- For three open-access articles on predictive/prognostic tests and separating value from hype, here, here, and here.
- For all Vickers AJ articles at PubMed, here.
- For his MSK web page, here.
For another example of Vickers' critical thinking, see "Validating Patient Reported Outcomes: A Low Bar," 2019, here. (It discusses, in part, this article here.)
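One of Vickers' best-known contributions to the clinical-utility question is decision curve analysis, which asks whether acting on a model's predictions yields a net benefit at clinically reasonable risk thresholds. The sketch below (Python, with simulated data and arbitrary thresholds; not code from any Vickers paper) shows the basic net-benefit calculation for two hypothetical risk models:

```python
# Minimal sketch of the net-benefit calculation behind decision curve analysis.
# Data, risk scores, and thresholds are simulated and illustrative only.
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of acting (e.g., biopsy) when predicted risk >= threshold."""
    n = len(y_true)
    act = risk >= threshold
    tp = np.sum(act & (y_true == 1))   # correctly treated
    fp = np.sum(act & (y_true == 0))   # unnecessarily treated
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(1)
n = 5000
y = rng.binomial(1, 0.25, n)           # 25% prevalence of the bad outcome
# Two hypothetical risk scores: one weakly, one moderately informative
risk_old = np.clip(0.25 + rng.normal(0, 0.05, n) + 0.10 * (y - 0.25), 0.01, 0.99)
risk_new = np.clip(0.25 + rng.normal(0, 0.05, n) + 0.25 * (y - 0.25), 0.01, 0.99)

for t in (0.10, 0.20, 0.30):
    nb_all = np.mean(y) - (1 - np.mean(y)) * t / (1 - t)   # "treat everyone"
    print(f"threshold {t:.2f}: treat-all {nb_all:+.3f}, "
          f"old model {net_benefit(y, risk_old, t):+.3f}, "
          f"new model {net_benefit(y, risk_new, t):+.3f}")
```

A model only earns its keep if its net benefit beats both "treat everyone" and "treat no one" across the range of thresholds a clinician would actually use.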
____
*Another issue arises when the MAAA test builds age and perhaps quite a bit of other clinical data into its algorithm. On one hand, this probably makes the MAAA test more accurate in absolute terms; that's good, and from that viewpoint it would be a bad idea to leave out information that makes the test more accurate. On the other hand, you already knew that a 75-year-old patient has a higher chance of (say) coronary disease than a 45-year-old; if part of the test is worth $1,500, it isn't the part that tells you that.
E.g., see the Panoptic test in LDCT screening for lung cancer, here; op-ed here.