First-pass strategic take
Yes, EOM is meaningfully different from OCM, but not
enough to be reassuring. It is better designed, narrower, more
patient-centered, more equity-aware, and more digitally ambitious. But the
early evidence still looks like OCM 2.0 rather than a true new operating
system for value-based cancer care: modest payment reductions, no visible
quality/utilization gain yet, and net losses once care-management and incentive
payments are counted.
That is actually a strong panel premise: VBCC has not
failed as an aspiration; it has failed because the measurement layer is still
too weak, too claims-centric, and too poorly connected to what oncology
patients and clinicians actually experience.
1. OCM vs EOM: what changed, and should we be reassured?
OCM: the disappointment
The final Brooks/JAMA paper is devastating in a quiet,
technocratic way. OCM ran from 2016–2022, included more than 200
practices, covered about one-fourth of systemic cancer treatment in FFS
Medicare, and produced a statistically significant but modest $616
reduction per episode, increasing to $1,282 in the final performance
period. But there were no statistically significant differences in
hospitalizations, ED visits, or quality, and after MEOS and performance
payments, Medicare had an estimated $639 million net loss.
That says: OCM learned how to bend some spending, but not
enough to pay for itself, and not with a measurable patient-quality signal.
The Thomas/Ward critique anticipated part of this: OCM was not a true
bundle; it preserved FFS and layered on a complicated shared-savings/payment
overlay, leaving practices partly paid to do more and partly rewarded for doing
less.
EOM: the real improvements
EOM is not just a rename. It makes several real changes:
| Design feature | OCM | EOM |
| --- | --- | --- |
| Cancer scope | Broad systemic therapy episodes | Narrowed to seven cancer types |
| Hormonal therapy-only episodes | Included | Generally excluded; focuses on systemic chemotherapy, not hormonal-only |
| Risk | Shared savings; downside risk evolved over time | Downside risk from the start |
| MEOS | $160 PBPM | $110 PBPM, or $140 PBPM for dual eligibles |
| Equity | Less central | HRSN screening, equity plans, enhanced payment for dual eligibles |
| Patient-reported outcomes | Not central | ePRO collection and monitoring required |
| Digital/data strategy | Less mature | Requires quality, clinical, and sociodemographic data; CEHRT and CQI use |
| Participation | >200 practices | As of March 2026, 28 practices and 1 commercial payer |
CMS describes EOM as focused on seven cancers, six-month
episodes, total-cost accountability, MEOS payments, required 24/7 access,
navigation, evidence-based guidelines, comprehensive care plans, HRSN
screening, ePROs, CQI data use, and certified EHR technology. (Centers for Medicare &
Medicaid Services)
The non-reassuring part
The first EOM evaluation already has a familiar smell. CMS’s
first at-a-glance report for Performance Period 1, July–December 2023,
says EOM likely reduced payments but produced a net loss to Medicare
after MEOS and incentive payments, and did not affect quality or utilization
measures, including hospice use before death or acute care utilization. (Centers for Medicare & Medicaid Services)
That is the core answer: EOM is more sophisticated, but
the early signal is still not a proof of VBCC. It is a better-engineered
experiment, not yet a successful value model.
2. Are there new and better metrics in EOM?
Yes, but with a big caveat. EOM adds the right categories
of measurement, but many are still not mature, outcome-forward, or digitally
reliable enough to bear payment accountability.
The good news is that EOM formally moves beyond pure claims
logic. It requires ePROs, HRSN screening, clinical data
elements, sociodemographic data, and quality reporting. CMS’s
participant resources now include an EOM Clinical Data Elements Guide,
Sociodemographic Data Elements Guide, Quality Measures Guide, and cost/quality
performance data files for Performance Periods 1–2. (Centers for Medicare & Medicaid
Services)
Your prior measurement report frames this well: the
plausible springboard is the convergence of EOM redesign requirements, digital
quality measures, and oncology interoperability infrastructure such as USCDI+
Cancer and mCODE.
But the caveat is decisive: EOM has better measurement
ingredients, not yet better measured outcomes. The model asks for ePROs and
HRSN data, but the early evaluation did not show measurable improvement in
acute care utilization or quality. (Centers for Medicare & Medicaid Services)
The most promising domains for real VBCC metrics are the
ones you already identified:
ePRO symptom/toxicity control. This is probably the
single strongest new metric domain because it connects directly to patient
experience, acute care avoidance, and clinical response workflows. Your report
proposes measuring both completion of standardized symptom assessments and timely
response to severe symptom alerts.
Physical function preservation. More meaningful than
“patient satisfaction,” and closer to what cancer patients care about: can I
function, work, walk, sleep, eat, and live my life?
Avoidable acute care utilization. Still useful, but
claims alone are crude. ED visits and admissions become more meaningful if
paired with symptom-triggered preventability review or ePRO context.
Evidence-based regimen/pathway concordance. This
could matter greatly, but it requires structured stage, biomarkers, treatment
intent, and exception logic. Otherwise, it becomes a documentation game.
Goal-concordant end-of-life care. Claims can measure
late chemotherapy and hospice timing, but the real outcome is whether care
matched patient goals. Your report rightly pairs claims-based EOL metrics with
structured goals-of-care documentation.
Financial toxicity. This is a missing VBCC domain. If
cancer care bankrupts or destabilizes a patient, “value” has not been achieved.
Your report proposes validated financial toxicity screening plus navigation
response.
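To make the ePRO domain concrete, here is a minimal sketch of how the two proposed process measures (completion of standardized symptom assessments, and timely response to severe symptom alerts) could be computed. Everything here is an illustrative assumption, not an EOM specification: the record schema, field names, and the 72-hour response window are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative record type; EOM does not prescribe this schema.
@dataclass
class EproAssessment:
    patient_id: str
    completed: bool                              # standardized symptom survey completed this period
    severe_alert_at: Optional[datetime] = None   # when a severe-symptom alert fired, if any
    clinician_response_at: Optional[datetime] = None  # when the care team responded, if any

def epro_measures(records: list,
                  response_window: timedelta = timedelta(hours=72)) -> dict:
    """Compute two hypothetical ePRO process measures:
    completion rate, and timely-response rate for severe alerts."""
    total = len(records)
    completed = sum(r.completed for r in records)
    alerts = [r for r in records if r.severe_alert_at is not None]
    timely = sum(
        1 for r in alerts
        if r.clinician_response_at is not None
        and r.clinician_response_at - r.severe_alert_at <= response_window
    )
    return {
        "completion_rate": completed / total if total else 0.0,
        "timely_response_rate": timely / len(alerts) if alerts else None,
    }
```

The design point: both measures are computable directly from workflow data, which is what distinguishes them from claims-derived proxies.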
So the answer is: EOM points toward better metrics, but
it has not yet demonstrated better outcomes. It is more promising as a
measurement platform than as a proven payment model.
3. What should “V3” of CMS value-based oncology look
like?
Your strongest forward-looking thesis is:
OCM tested whether care-management payments plus shared
savings could make oncology cheaper. EOM tests whether a narrower,
risk-bearing, equity-aware model with ePROs can do better. V3 should test
whether digitally computable, patient-centered oncology outcomes can finally
become the basis of value-based cancer care.
In other words, V3 should not be merely EOM with
different benchmarks. It should be a model where measurement is the
product.
V3 should have five design principles
First, V3 should make oncology clinically legible. It
cannot rely mainly on claims. It needs structured diagnosis, stage, biomarkers,
line of therapy, treatment intent, progression/recurrence, performance status,
and death date. EOM’s clinical data element work and mCODE/FHIR alignment are
early scaffolding, but V3 should make this the required substrate. Your report
puts it neatly: oncology’s real clinical context is often buried in notes and
cannot be reliably inferred from claims.
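As a sketch of what "clinically legible" means in practice, the required structured substrate might look something like the record below. This is loosely inspired by mCODE concepts but is not an actual mCODE/FHIR profile; the field names, codes, and the readiness rule are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative structured oncology record; not an actual mCODE/FHIR profile.
@dataclass
class OncologyEpisodeRecord:
    patient_id: str
    primary_diagnosis_code: str                     # e.g., an ICD-10-CM code
    stage: Optional[str] = None                     # e.g., a TNM stage group
    biomarkers: dict = field(default_factory=dict)  # e.g., {"HER2": "positive"}
    line_of_therapy: Optional[int] = None
    treatment_intent: Optional[str] = None          # "curative" or "palliative"
    ecog_performance_status: Optional[int] = None
    progression_events: list = field(default_factory=list)
    date_of_death: Optional[str] = None

def is_measurement_ready(rec: OncologyEpisodeRecord) -> bool:
    """A record can anchor outcome measurement only if the core
    clinical context (stage, intent, line of therapy) is populated."""
    return all(v is not None for v in
               (rec.stage, rec.treatment_intent, rec.line_of_therapy))
```

The point of the readiness check is the argument above: without stage, intent, and line of therapy, an episode cannot be risk-contextualized, so any downstream "outcome" measure is built on sand.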
Second, V3 should use AI/NLP as measurement
infrastructure, not as a magic wand. The panel topic is strongest if you
say: AI review of EHRs may finally allow VBCC to measure what claims
cannot—stage, progression, toxicity, treatment intent, ECOG-like function,
adverse events, biomarker appropriateness, and goals-of-care discussions. But
AI-derived metrics must be validated, audited, version-controlled, and
bias-tested. Otherwise V3 becomes a digital façade: computable, impressive, and
wrong.
Third, V3 should use a small core measure set.
Something like 8–12 measures, not 50. Candidate domains: symptom control,
functional preservation, avoidable acute care, evidence-based regimen
appropriateness, time to treatment, EOL goal-concordance, financial toxicity,
and equity/whole-person supports. That aligns with your prior report’s proposed
VBCC portfolio.
Fourth, V3 should separate “drug price exposure” from
“care delivery performance.” OCM and EOM struggle because oncology spending
is dominated by therapies whose prices and clinical indications are not fully
controlled by the practice. EOM’s early report says systemic cancer treatment
drug spending accounts for about 58% of EOM episode costs, and
participants reported focusing on drug spending interventions. (Centers for Medicare & Medicaid Services)
V3 should distinguish: Did the practice choose appropriate therapy? Did it
manage toxicity? Did it avoid preventable acute care? Did it align care with
patient goals? It should not simply punish a practice because the correct
therapy is expensive.
Fifth, V3 should make equity measurable without making
safety-net care financially toxic to providers. EOM adds HRSN screening and
dual-eligible enhanced MEOS payments, which is directionally right. But V3
should require equity stratification and closed-loop resource referral metrics
while protecting practices that care for medically and socially complex
populations. CMS describes EOM as requiring HRSN screening, equity plans,
expenditure/utilization reports to identify disparities, and higher MEOS
payments for dual eligibles.
A sharper conference thesis
Here is the central framing I would use:
“Value-based cancer care has been stuck in an awkward
middle stage: payers can measure cost, but not value; clinicians can describe
value, but not compute it; patients can feel value, but it rarely appears in
payment models. OCM showed that care redesign can produce modest savings
without measurable quality improvement. EOM improves the model by adding
downside risk, ePROs, health-related social-needs screening, and richer data
requirements, but early results still look financially and clinically inconclusive.
The next version of VBCC will depend less on another tweak to shared savings
and more on whether AI-enabled EHR review, ePROs, mCODE/USCDI+ Cancer, and
digital quality measures can produce auditable, patient-centered,
oncology-specific metrics at scale.”
That is a strong panel because it is neither naïvely
optimistic nor drearily cynical.
Draft paragraph-long panel proposal
Are We Finally Ready for Real Value-Based Cancer Care?
From OCM and EOM to AI-Enabled Measurement.
For more than a decade, value-based cancer care has promised to reward better
outcomes rather than higher volume, yet progress on the ground has remained
limited. The Oncology Care Model produced modest reductions in Medicare episode
payments but no significant improvements in utilization or quality, and net
losses to Medicare after model payments. Its successor, the Enhancing Oncology
Model, adds important improvements—downside risk, narrower cancer scope,
electronic patient-reported outcomes, health-related social-needs screening,
enhanced services, and richer clinical data requirements—but early results
remain inconclusive. This panel will ask whether the missing ingredient has
been measurement itself: the ability to capture oncology stage, biomarkers,
treatment intent, toxicity, function, goals of care, financial toxicity, and
equity outcomes in computable, auditable form. We will explore whether AI-enabled
EHR review, ePROs, mCODE/USCDI+ Cancer, and digital quality measures can
support a true next-generation “V3” oncology value model—one that moves beyond
claims-based cost control toward patient-centered, clinically meaningful cancer
care performance.
Possible panel title options
Best straightforward title:
From OCM to EOM to V3: Can Better Measurement Finally Make Cancer Care
Value-Based?
More provocative:
Value-Based Cancer Care’s Missing Operating System: Metrics, AI, and the
Road Beyond EOM
Most “conference program” friendly:
The Next Generation of Value-Based Cancer Care: Lessons from OCM, Early EOM,
and AI-Enabled Outcome Measurement
Most Bruce-style:
After OCM and EOM: Is Value-Based Cancer Care Still Waiting for Its
Measurement System?
Suggested panel architecture
You as moderator/host should set up the tension: “We
have had a decade of VBCC conferences, but the field still often means payer
coverage plus ASP drug pricing. What would make it real?”
Ideal panelists:
- OCM evaluation author: Gabriel Brooks would be excellent, because the JAMA paper is now the cleanest empirical anchor. Nancy Keating would also be very strong, especially for evaluation design and interpretation. The Brooks paper lists Brooks, Trombley, Landrum, Liu, Simon, and Keating among key authors, with Brooks, Trombley, and Keating drafting the manuscript.
- CMMI/EOM representative: Someone who can speak to EOM's design choices, first evaluation, clinical data elements, ePROs, HRSN screening, and what CMS hopes to learn by 2030. Even if they cannot speculate about "V3," they can discuss what EOM is designed to test.
- Community oncology practice leader: Preferably someone actually implementing EOM, ePROs, navigation, urgent care access, and "call us first" workflows. This prevents the panel from becoming a policy seminar in the clouds.
- Measurement/interoperability person: Someone from ONC/ASTP, NCI's USCDI+ Cancer effort, HL7/mCODE, NCQA, or a serious EHR/FHIR implementer. Their role: what needs to be standardized before oncology outcomes can be measured without heroic chart abstraction?
- Patient-centered outcomes / financial toxicity voice: Ideally someone working on ePROs, symptom monitoring, financial toxicity, or patient advocacy. This keeps the panel honest: VBCC should not become "cost-based cancer care with nicer adjectives."
My bottom-line answer to your three questions
OCM vs EOM: EOM is a real improvement in design, but
not yet reassuring in results. It narrows the model, adds downside risk,
requires ePROs and HRSN screening, and builds a richer data strategy. But early
EOM still shows payment reductions offset by participant payments and no clear
quality/utilization improvement.
Metrics in EOM: EOM has better metric
ingredients—especially ePROs, HRSN screening, clinical data elements,
sociodemographic data, and CQI reporting—but the decisive step from “data
collection” to “valid outcome measurement” is still ahead.
V3 forecast: V3 should be an
AI/digital-measurement-enabled oncology value model: clinically rich,
patient-centered, equity-stratified, auditable, and built on structured
oncology data plus validated AI extraction from EHR text. The goal is not
simply to save money on chemotherapy episodes, but to measure whether cancer
care actually improves symptoms, function, appropriateness, goal-concordance,
financial well-being, and avoidable acute care—without drowning practices in
manual reporting.