Wednesday, April 8, 2026

Rural Health Technology Links

 

Revisiting the rural and urban divide in hospital health information technology adoption: Evidence from 2023 (PMC)

Alice S Yan, Teagan K Maguire, Jie Chen. J Rural Health. 2025 Dec 10;41(4):e70100. doi: 10.1111/jrh.70100. Cited by 1.

“However, adoption alone is not enough. Ensuring that these technologies translate into equitable use remains a critical challenge. Sustaining ...”



Mar 24, 2026 — It comes as the Trump administration is pushing the increased use of AI to help transform rural health. Health Secretary Robert F. Kennedy Jr. ...

Apr 9, 2025 — Technological gaps handicap rural hospitals as billions in federal funding to modernize infrastructure lags. The reliance on outdated ...

Digital technology has the potential to help with this and other communications problems. Patients no longer need to visit doctors' offices to be reminded to ...

Mar 30, 2026 — Rural hospitals that approach telehealth this way consistently see improvements across patient outcomes, clinician satisfaction, and financial ...

by ME Frisse (2005; cited by 6) — The costs that these organizations face when contemplating necessary information technology seem formidable to the point of not being affordable.

Rural healthcare providers face three primary IT hurdles: lack of broadband access, telehealth availability, and overall system cost constraints.

Some interesting papers: RWE, bridging studies, and external controls

 FDA publishes 73 examples of RWE in devices.

Acacia Parks at LinkedIn

https://www.linkedin.com/posts/acacia-parks_digitaltherapeutics-realworldevidence-rwe-share-7445948835689820160-bQgL/


2015 FDA paper by Li on Bridging Studies in precision medicine

https://www.linkedin.com/posts/cdxbridgingli2015aianalysispdf-ugcPost-7446171769121722368-Ouel/


Prinzi (bioMérieux) on Quinn citation re Shannon theory

https://www.linkedin.com/posts/andreaprinzi_horizons-in-diagnostics-value-case-study-activity-7445981210818002945-SEbx/

Friends of Cancer Research on External Control Arms

https://www.linkedin.com/posts/eca-pilot-project-overview-and-preliminary-ugcPost-7444819907130626048-ulm8/

This is an April 7, 2026 workshop; an online two-pager is posted at LinkedIn.


SCANNERS - How important are brand differences?  

https://www.linkedin.com/posts/hdcouture_%F0%9D%98%BD%F0%9D%99%9A%F0%9D%99%AE%F0%9D%99%A4%F0%9D%99%A3%F0%9D%99%99-%F0%9D%99%A9%F0%9D%99%9D%F0%9D%99%9A-%F0%9D%98%BD%F0%9D%99%9A%F0%9D%99%A3%F0%9D%99%98%F0%9D%99%9D%F0%9D%99%A2%F0%9D%99%96%F0%9D%99%A7%F0%9D%99%A0-%F0%9D%99%8E%F0%9D%99%A9%F0%9D%99%A7%F0%9D%99%9A%F0%9D%99%A8%F0%9D%99%A8-%F0%9D%99%8F%F0%9D%99%9A%F0%9D%99%A8%F0%9D%99%A9%F0%9D%99%9E%F0%9D%99%A3%F0%9D%99%9C-share-7446874370423271424-k0G2/


2026 JPCR paper by Chai: impact of tissue stainer AND scanner in sarcoma

https://pathsocjournals.onlinelibrary.wiley.com/doi/full/10.1002/2056-4538.70080

Tuesday, April 7, 2026

ChatGPT 20-page report on Value in Cancer Care, Today through 2030

18 pages as PDF; 4,500 words.

Cut-paste below.

Generated with ChatGPT “Deep Research” mode (45 minutes).


Value-Based Cancer Care Measurement to 2030

Executive summary

Purpose. This report is designed to help plan a fall conference panel on measurement as the “missing operating system” for value-based cancer care (VBCC): why progress has been limited, what measures would actually enable VBCC, what is missing today, what a tangible future state looks like, and how to get there by 2030. The analysis prioritizes primary/official sources (CMS/CMMI, ONC/ASTP, NCI, NCQA, HL7/mCODE, ICHOM) and peer‑reviewed evidence (notably Basch ePRO trials). [1–15]

Central finding. VBCC has not failed for lack of payment experimentation; it has stalled because measurement has remained too claims-centric, too process-heavy, too misaligned across programs, and too weakly connected to patient-centered outcomes and real oncology clinical context—while also imposing high reporting burden and inviting gaming. The Oncology Care Model (OCM) is the clearest signal: practices reported substantial care redesign, yet accountability quality measures showed no significant improvement versus comparison groups, and practice-reported process gains did not translate into patient-reported or claims-based outcome gains. [2–3] At the same time, patient experience scores were already high (a ceiling effect), and the COVID public health emergency complicated several patient-reported domains. [2–3]

Why this moment is different. The plausible “springboard” is the convergence of (a) CMS’s next-generation oncology model requirements (EOM’s eight participant redesign activities include ePROs, HRSN screening, CEHRT use, and CQI data use), [4–6] (b) the federal shift toward digital quality measures (dQMs) and alignment (Meaningful Measures 2.0; Universal Foundation), [10–12] and (c) oncology-specific interoperability infrastructure—USCDI+ Cancer (ONC + NCI, with CMS/CDC/FDA input) and HL7 mCODE as a minimal structured oncology dataset with FHIR-based exchange. [13–15] Together, these can reduce abstraction burden and make patient-centered outcomes measurable at scale.

What VBCC measurement should become. A credible VBCC measurement portfolio should be small (≈8–12 measures), outcome-forward, equity‑stratified, and digitally computable from interoperable data. This report proposes 8 candidate measures spanning: symptom/toxicity control via ePROs, functional status, avoidable acute care, evidence-based regimen appropriateness, timeliness, end-of-life (EOL) goal-concordant care, financial toxicity, and equity/whole-person supports. Each is defined with required data elements and risk-adjustment needs, and compared in a single table (below). [5–9], [13–18]

2030 future state in one sentence. By 2030, a “VBCC-ready” system can (1) capture core oncology facts (diagnosis, stage, key biomarkers, therapy intent) in standardized fields (mCODE/USCDI+ Cancer), [13–16] (2) routinely capture ePROs and respond clinically, [5–9] and (3) compute a core set of dQMs that are aligned across payers, auditable, equity‑stratified, and used for rapid-cycle improvement—not merely reporting. [10–12]

Assumptions. The exact conference date, panel duration, and confirmed panel format are not specified; this report assumes a 60–90 minute panel with a policy/technical audience and the term “VBCC website” refers to the trade publication Value-Based Cancer Care (valuebasedcancer.com). [19]

Why decades of work produced limited progress

VBCC is often defined as improving outcomes that matter to patients per dollar spent. That concept is clear; what has been unclear is which outcomes to measure, how to compute them reliably, and how to avoid drowning clinicians in reporting. Porter’s foundational framing underscores that value requires outcomes measurement, not just costing or utilization tracking. [1]

Claims-first measurement shaped incentives toward what is easiest to count. Early oncology “value” programs leaned heavily on claims-based utilization and narrow process metrics (e.g., ED visits, hospitalizations, hospice timing) because these are available and standardizable. In OCM, CMS held practices accountable on several such measures, yet the final evaluation found no significant improvement versus comparison groups on the accountable quality measures, even as practices reported substantial redesign work. [2–3] This fuels the “emperor has no clothes” reaction: lots of effort, weak signal of better patient outcomes.

Misalignment and measure proliferation diluted focus and comparability. CMS acknowledges the burden and fragmentation problem through Meaningful Measures 2.0 and the Cascade of Meaningful Measures, explicitly aiming to reduce burden, align measures, and prioritize what matters. [11–12] The Universal Foundation is CMS’s attempt to streamline high-priority measures across programs; a parallel NEJM analysis describes the need for cross-program alignment because CMS historically used hundreds of measures that were not always aligned. [12], [20]

Oncology’s clinical reality is not “measurement-ready.” Valid comparisons require clinical context—stage, biomarkers, line of therapy, performance status, recurrence/progression—and longitudinal follow-up. These elements are inconsistently structured in EHRs, often buried in narrative notes, and are not reliably inferable from claims. This is precisely why USCDI+ Cancer and mCODE exist: to standardize the oncology dataset and enable interoperable exchange. [13–16]

Attribution and episode definitions are structurally hard in cancer care. Cancer patients traverse surgeons, medical oncologists, radiation oncologists, hospitals, and ancillary services; attributing outcomes or costs to a single entity can be unstable. Empirical work on newly diagnosed cancer patients highlights attribution challenges that must be addressed for accurate quality measurement and payment design. [21] Episode-based oncology payment model design also faces definitional challenges because episodes must be observable from claims and clinically meaningful—often a tension. [22]

Burden and workflow misfit undermined “measure-to-improve.” Measurement has too often been “measure-to-report.” Meanwhile, oncology EHR workload has increased: a national analysis of oncology specialists’ EHR inbox work reported a 19% increase in weekly inbox messages from 2019 to 2022, with high burdens in medical oncology/hematology—making additional manual measurement particularly fragile. [23]

Measures that would support VBCC

A VBCC measure set should satisfy four design tests:

1) Patient-centered outcome linkage: directly reflects symptom burden, function, survival proxies, goal-concordant care, or financial well-being. [1], [5–9], [18]
2) Clinical actionability: results can trigger care redesign (navigation, toxicity management, regimen selection, end-of-life conversations). [4–6]
3) Computability at scale: feasible from claims + standardized EHR data + ePROs, using dQM specifications where possible. [10–12], [13–16]
4) Fairness and auditability: explicit risk adjustment and guardrails against selection, coding inflation, and exception misuse. [10–12], [24]

Candidate measure comparison table

Each candidate measure (domain) is listed with: precise definition (example specification); key data elements required; primary data source(s); computability today; and major barriers / risk-adjustment needs.

1. ePRO symptomatic toxicity control (patient-centered outcomes)
·       Definition: Among patients initiating systemic therapy in the measurement period, % completing standardized ePRO symptom assessments at defined intervals and % of severe symptom alerts with documented clinical follow-up within 48 hours.
·       Data elements: Patient identifier; therapy start date; standardized symptom instrument (e.g., PRO‑CTCAE/PROMIS domains); timestamped alert; follow-up action.
·       Sources: ePRO platform + EHR; partially claims (therapy trigger).
·       Computability today: Medium (increasing in EOM).
·       Barriers / risk adjustment: Workflow integration; standard alert logic; missing PRO completion in underserved groups; risk adjust by cancer type, regimen intensity, baseline symptom burden, language access. [5–6], [9], [11]

2. Physical function preservation (function)
·       Definition: Mean change (or % with clinically meaningful decline) in PROMIS physical function score from baseline to 3 months after therapy start.
·       Data elements: Baseline and follow-up PROMIS PF; therapy start; demographics.
·       Sources: ePRO + EHR.
·       Computability today: Low–Medium.
·       Barriers / risk adjustment: Baseline capture; instrument licensing/workflow; case-mix adjustment; ensure accessibility for older/disabled patients. [11], [5–6]

3. Avoidable acute care utilization during episodes (utilization/outcomes proxy)
·       Definition: Risk-adjusted rate of ED visits not leading to admission per 6‑month episode; optionally paired with symptom-triggered preventability review.
·       Data elements: Episode attribution; ED visit claims; admission linkage.
·       Sources: Claims (CMS/payer).
·       Computability today: High.
·       Barriers / risk adjustment: Attribution; preventability not captured; risk adjust by cancer type/stage, comorbidity, social risk. [2–4], [21–22]

4. Evidence-based regimen appropriateness / pathway concordance (appropriateness)
·       Definition: % of new regimens consistent with specified evidence-based guidelines/pathways given stage/biomarkers (with documented exceptions).
·       Data elements: Diagnosis; TNM stage; key tumor markers (e.g., ER); regimen and dosing; exception reason.
·       Sources: EHR orders; pathway system; mCODE-aligned oncology data.
·       Computability today: Medium.
·       Barriers / risk adjustment: Proprietary pathway definitions; incomplete structured stage/biomarkers; gaming via exception overuse; risk adjust by disease subtype and treatment intent. [16], [15], [25]

5. Timeliness from diagnosis/staging to treatment initiation (timeliness)
·       Definition: Median days from “initial diagnosis date” to initiating cancer therapy; stratify by cancer type and stage (and by referral source).
·       Data elements: Date of diagnosis; date of therapy; stage; referral/consult timestamps.
·       Sources: EHR + registry + claims.
·       Computability today: Low–Medium.
·       Barriers / risk adjustment: Cross‑org data; ambiguous definitions; staging completion dates; risk adjust by complexity, access constraints. USCDI+ Cancer includes timeliness-related use cases. [13–14], [16]

6. Goal-concordant end-of-life care (EOL quality)
·       Definition: % receiving systemic therapy in last 14 days of life; % hospice enrollment ≥7 days before death; paired with goals-of-care documentation rate.
·       Data elements: Date of death; therapy claims; hospice claims; (optional) structured goals-of-care.
·       Sources: Claims + EHR.
·       Computability today: High (claims) / Low (goals).
·       Barriers / risk adjustment: Death date availability; clinical nuance (appropriate late therapy in select cases); gaming by shifting care settings; adjust for cancer trajectory and patient preference. [2–3], [26]

7. Financial toxicity screening and navigation response (financial well-being)
·       Definition: % screened with validated instrument (e.g., COST) within 60 days of therapy start; among “high toxicity,” % receiving financial navigation within 30 days.
·       Data elements: COST score; therapy start date; navigation referral and completion.
·       Sources: ePRO + EHR + revenue-cycle systems.
·       Computability today: Low–Medium.
·       Barriers / risk adjustment: Many systems lack standardized workflows; risk adjust by baseline socioeconomic status; avoid penalizing safety-net providers; ensure language/cultural validity. [18], [6], [11]

8. Equity and whole-person supports (equity/HRSN)
·       Definition: % screened for HRSN domains and % with closed-loop resource connection; report key outcome measures stratified by race/ethnicity, dual-eligibility, and neighborhood disadvantage.
·       Data elements: Demographics; HRSN screening results; referral; completion; stratification variables.
·       Sources: EHR + community resource platforms + claims.
·       Computability today: Medium (in EOM).
·       Barriers / risk adjustment: Data completeness; standard capture of race/ethnicity/language; accountability for community resource availability; risk adjust for structural barriers, not just clinical factors. [4–6], [13], [27]
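To make the first candidate measure concrete, the severe-alert follow-up component reduces to simple numerator/denominator logic. A minimal sketch follows; the record fields and sample data are hypothetical illustrations, not a published dQM specification.

```python
from datetime import datetime, timedelta

# Hypothetical ePRO alert records; field names are illustrative only.
alerts = [
    {"patient": "A", "severity": "severe",
     "alert_time": datetime(2026, 3, 1, 9, 0),
     "follow_up_time": datetime(2026, 3, 2, 8, 0)},   # follow-up within 48 h
    {"patient": "B", "severity": "severe",
     "alert_time": datetime(2026, 3, 1, 9, 0),
     "follow_up_time": datetime(2026, 3, 4, 9, 0)},   # follow-up outside 48 h
    {"patient": "C", "severity": "moderate",
     "alert_time": datetime(2026, 3, 2, 9, 0),
     "follow_up_time": None},                          # not in denominator
]

def severe_alert_response_rate(alerts, window_hours=48):
    """% of severe ePRO alerts with documented follow-up within the window."""
    denominator = [a for a in alerts if a["severity"] == "severe"]
    if not denominator:
        return None
    window = timedelta(hours=window_hours)
    numerator = [a for a in denominator
                 if a["follow_up_time"] is not None
                 and a["follow_up_time"] - a["alert_time"] <= window]
    return len(numerator) / len(denominator)

print(severe_alert_response_rate(alerts))  # 0.5
```

The point of the sketch is that once alert and follow-up events carry timestamps in standard fields, the measure is computable with no manual abstraction.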

What is lacking today

Data and standardization gaps

Structured oncology facts remain inconsistent across EHRs. CMS’s EOM Clinical Data Elements (CDE) Guide illustrates the minimum clinical detail needed even for a limited set of models: diagnosis date, death date, recurrence/relapse status, metastatic history, TNM staging, and tumor markers (e.g., ER for breast cancer), mapped to mCODE/FHIR elements for high-tech submission. [16] The existence of such guidance is progress, but it also highlights the current reality: many practices cannot reliably compute nuanced measures because these fields are either missing, inconsistently modeled, or unstructured.

Interoperability is improving, but unevenly adopted. ONC’s Cures Act Final Rule pushes standardized APIs and addresses information blocking to enable growth of interoperable apps and data use. [28–29] Yet, interoperability alone does not guarantee semantic consistency (same meaning, same code sets, same timestamps), which is required for measure validity.

Workflows, burden, and the “last mile” problem

ePROs have strong evidence but weak operational penetration. Randomized trials by Basch and colleagues show ePRO symptom monitoring can improve quality of life, reduce symptom burden, reduce acute care use, and in some studies improve survival. [7–9] However, scaling requires (a) patient enrollment and sustained completion, (b) alert triage protocols, (c) EHR integration, and (d) clinical response capacity—each a failure point.

Reporting burden competes with improvement. Rising oncology EHR message volume and “work outside work” time heighten the risk that new measures become administrative tasks rather than improvement tools. [23] VBCC measurement that is not digitally computable risks worsening burnout and undermining adoption.

Equity, patient-reported outcomes, and financial toxicity remain under-measured

Equity stratification is more policy requirement than measurement norm. EOM requires screening for HRSNs and embeds health equity-related redesign activities. [4–6] USCDI+ Cancer explicitly aims to define core real-world data elements to support care, research, and interoperability with cross-HHS involvement (NCI + ONC, with CMS/CDC/FDA input). [13] Yet, in practice, demographic and social risk data are incomplete or inconsistent, and closed-loop referral outcomes are rarely captured in standard fields.

Financial toxicity is measurable but not systematized. COST (FACIT-COST) is validated as a patient-reported measure of financial toxicity in cancer. [18] Despite this, most measurement programs still do not treat financial toxicity as a core VBCC outcome with defined numerator/denominator logic and accountability for navigation response.

Attribution and gaming risks

Attribution is an “engineering constraint,” not an afterthought. If quality measures are tied to episodes or entities that cannot be attributed consistently, the measurement signal becomes noise. Evidence on patient attribution in newly diagnosed cancer underscores these challenges for accurate quality measurement and payment. [21] Episode-based payment design literature similarly identifies the need for observable, well-defined intervals and compatible quality measurement. [22]

Gaming risks are real and predictable. When measures are process-based or loosely specified, organizations can optimize documentation, coding, and exception pathways rather than outcomes. Digital measures help only if paired with transparent specifications, version control, auditing, and validation.

Future-state VBCC measurement vision to 2030

A tangible future state is best described as three synchronized workflows: clinic, payer/program, and patient.

Clinic workflow

In a “VBCC-ready” clinic, care teams do not “report measures”; they run care workflows that inherently generate computable data:

·       Core oncology clinical facts (diagnosis, stage, key markers, treatment intent) are captured in consistent structured fields and exchanged via FHIR/mCODE-aligned profiles. [15–16]

·       ePROs are routine during systemic therapy, with standardized instruments and triage protocols, and ePRO data are visible in the EHR and used in daily operations. [5–6], [7–9]

·       Navigation, HRSN screening, and health equity plans are integrated into the episode pathway and monitored as operational KPIs. [4–6]

Payer/program workflow

·       A small aligned measure set is computed as dQMs (standards-based specifications, code packages, interoperable data) from multiple sources (claims + EHR/FHIR + ePRO + registries). [10–12]

·       Equity stratification is standard in reporting dashboards, and incentives are structured to avoid penalizing safety‑net practices for patient risk and structural barriers. [4–6], [13]

·       Measures are paired with learning: quarterly feedback cycles, targeted supports, and measurement updates with governance.
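Equity stratification of a computed measure is mechanically simple once stratification variables are captured. A toy sketch, with hypothetical patient-level results and dual-eligibility as the example stratum:

```python
from collections import defaultdict

# Illustrative patient-level measure results; the stratification variable
# (dual-eligibility) and values are hypothetical.
results = [
    {"patient": "A", "dual_eligible": True,  "measure_met": True},
    {"patient": "B", "dual_eligible": True,  "measure_met": False},
    {"patient": "C", "dual_eligible": False, "measure_met": True},
    {"patient": "D", "dual_eligible": False, "measure_met": True},
]

def stratified_rates(results, stratum_key):
    """Measure performance rate within each stratum."""
    groups = defaultdict(list)
    for r in results:
        groups[r[stratum_key]].append(r["measure_met"])
    return {stratum: sum(met) / len(met) for stratum, met in groups.items()}

print(stratified_rates(results, "dual_eligible"))
# {True: 0.5, False: 1.0}
```

The hard part is not the arithmetic but the data: complete, standardized capture of the stratification variables themselves.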

Patient workflow

·       Patients know what “value” means operationally: symptom control, function, goal-concordant care, financial well-being, and fairness.

·       Patients can contribute data via portals/apps without friction, and they see feedback loops (“you reported severe nausea; nurse called; antiemetic adjusted”). [5–6], [7–9]

·       Patients can access and share oncology data across systems due to standardized APIs and reduced information blocking. [28–29]

Milestones and timelines to 2030

CMS’s EOM runs through June 30, 2030, providing a real-world runway for maturing measurement and digital reporting pathways. [4], [30] The federal quality ecosystem also faces a widely cited goal of transitioning to all digital measures by 2030. [31] A pragmatic milestones framework:

·       Near term (through 2027): “Make patient-centered data routine.”
  • ePRO adoption reaches operational reliability in participating oncology practices (enrollment, completion, triage, documented responses). [5–6]
  • HRSN screening and referral workflows reach stable capture with stratified reporting. [4–6]
  • Core oncology structured data capture expands using EOM CDE-like elements and mCODE patterns. [15–16]

·       Mid term (2028–2029): “Make measures digitally computable and aligned.”
  • Multi-payer pilots compute a core VBCC measure set as dQMs using claims + FHIR + ePRO feeds. [10–12]
  • Governance matures: common measure specs, semantic validation, audit pathways, and versioning. [10], [24]

·       Long term (2030): “Benchmark outcomes credibly.”
  • Outcome benchmarking includes more direct measures of symptom burden, function, and goal-concordant care, equity-stratified and risk-adjusted, with substantially reduced abstraction burden. [10–13], [15–16], [31]

Mermaid source for the staged timeline:

timeline
  title Staged VBCC measurement timeline to 2030
  2026 : Establish "minimum viable VBCC" measure set
       : Expand ePRO + HRSN workflows in oncology episodes
  2027 : Standardize core oncology data elements (mCODE/USCDI+ Cancer alignment)
       : Routine equity stratification for core measures
  2028 : Multi-source dQM pilots (claims + FHIR + ePRO)
       : Governance: semantic validation + audit models
  2029 : Cross-payer alignment (Universal Foundation-style streamlining)
       : Reduced manual abstraction through automation
  2030 : EOM ends; mature digital measurement ecosystem
       : Credible benchmarking of patient-centered outcomes

Springboard technologies and policies, plus governance needs

Digital quality measures and alignment infrastructure

CMS defines dQMs as quality measures using standardized digital data captured and exchanged via interoperable systems, with standards-based specifications/code packages, computable in an integrated environment. [10] Meaningful Measures 2.0 explicitly emphasizes simplifying PRO-PMs and embedding them into EHR workflow via APIs and use of standardized tools (including NIH PROMIS instruments). [11] The Universal Foundation is designed to streamline high-priority measures across CMS programs—addressing the “too many unaligned measures” problem. [12]

Interoperability rails: ONC’s Cures Act Final Rule, USCDI+ Cancer, and mCODE

ONC’s Cures Act Final Rule supports secure access/exchange/use of EHI, calls for standardized APIs, and targets information blocking—critical prerequisites for multi-source measurement and patient-centered data flows. [28–29]

USCDI+ Cancer is collaboratively managed by NCI and ONC with multi-agency input, aiming to define real-world data elements to support prevention, diagnosis, treatment, research, and care, with explicit use cases. [13]

HL7’s mCODE is a core structured oncology dataset intended to increase interoperability; an early peer-reviewed overview describes its organization into domains (patient, lab/vitals, disease, genomics, treatment, outcome). [15], [32] CMS’s EOM CDE guide demonstrates an applied approach: CDEs can be reported via templates or via HL7 FHIR API mapped to mCODE elements—illustrating a bridge from low-tech reporting to high-tech computability. [16]
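To illustrate the bridge from exchanged resources to flat fields usable by measure logic, here is a minimal sketch over simplified FHIR-style dicts loosely patterned on mCODE; these shapes are illustrative, not conformant mCODE/USCDI+ Cancer payloads.

```python
# Simplified FHIR-style resources; illustrative shapes only, not a
# conformant mCODE bundle.
bundle = [
    {"resourceType": "Condition",
     "code": {"text": "Malignant neoplasm of breast"},
     "onsetDateTime": "2026-01-15"},
    {"resourceType": "Observation",
     "code": {"text": "TNM clinical stage group"},
     "valueCodeableConcept": {"text": "Stage IIB"}},
]

def extract_core_facts(bundle):
    """Pull diagnosis and stage group into flat fields for measure logic."""
    facts = {}
    for res in bundle:
        if res["resourceType"] == "Condition":
            facts["diagnosis"] = res["code"]["text"]
            facts["diagnosis_date"] = res.get("onsetDateTime")
        elif (res["resourceType"] == "Observation"
              and "stage" in res["code"]["text"].lower()):
            facts["stage_group"] = res["valueCodeableConcept"]["text"]
    return facts

print(extract_core_facts(bundle))
```

In a real pipeline this extraction would key off coded mCODE profiles rather than display text, but the flattening step is the same.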

ePRO operationalization in EOM

CMS provides stepwise guidance for ePRO implementation in EOM and encourages valid/reliable instruments suitable for diverse populations; EOM requires increasing uptake over time and expects integration into information system workflow with EMR visualization and eligibility identification. [6] This is the most concrete federal lever currently driving routine capture of patient-reported symptom and function domains in oncology episodes. [4–6]

AI-assisted extraction: promise and constraints

AI/NLP can reduce abstraction burden by extracting stage, biomarkers, progression/recurrence events, and toxicity from unstructured notes—but only if it is validated, monitored for bias, and anchored to standardized data definitions. Evidence from mCODE implementation pilots suggests promise but also highlights limitations of current FHIR APIs and structured data availability for complex oncology analysis—implying AI will be needed as a bridge while structured capture matures. [33]

Governance and validation requirements

Digitizing measures can digitize errors if governance lags. Minimum governance requirements:

·       Specification governance: open, versioned specs; code sets; change control; alignment across payers. [10–12]

·       Semantic validation and testing: eCQI defines semantic validation as comparing formal criteria to manual computation from the same test database—still essential as measures go digital. [24]

·       Equity safeguards: stratification requirements, fairness monitoring, and avoidance of perverse incentives that penalize providers serving higher-risk populations. [4–6], [13]
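In the spirit of the semantic-validation requirement, a toy harness: run the formal (coded) measure logic and compare it, patient by patient, with manual computation from the same test database. All records and field names below are hypothetical.

```python
# Hypothetical test database for one claims-based criterion.
test_db = [
    {"patient": "A", "ed_visits": 2, "admitted": False},
    {"patient": "B", "ed_visits": 0, "admitted": False},
    {"patient": "C", "ed_visits": 1, "admitted": True},
]

def coded_logic(record):
    """Formal criterion: any ED visit not leading to admission."""
    return record["ed_visits"] > 0 and not record["admitted"]

# Results of a manual abstractor applying the narrative specification
# to the same records.
manual_results = {"A": True, "B": False, "C": False}

discrepancies = [r["patient"] for r in test_db
                 if coded_logic(r) != manual_results[r["patient"]]]
print(discrepancies)  # [] means coded logic matches manual abstraction
```

Any non-empty discrepancy list flags a semantic gap between the narrative specification and its coded implementation before the measure goes live.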

Mermaid source for the measurement data flow (node labels containing parentheses are quoted so the diagram parses):

flowchart LR
  subgraph Data_Sources[Data sources]
    EHR["EHR clinical data\n(stage, markers, meds)"]
    PRO["ePRO platform\n(symptoms, function, distress)"]
    Claims["Claims\n(utilization, cost, hospice)"]
    Registry["Cancer registries\n(dx, stage, survival)"]
    SDOH["HRSN/Community resource data\n(screening, referrals)"]
  end

  subgraph Interop[Interoperability + standards]
    FHIR[FHIR APIs]
    mCODE[mCODE profiles]
    USCDI["USCDI+ Cancer elements"]
  end

  subgraph Measure_Engine[Measure computation]
    dQM["dQM specifications\n(CQL/code packages)"]
    Risk["Risk adjustment + stratification\n(case-mix, equity)"]
    Audit["Validation + audit\n(semantic testing)"]
  end

  subgraph Use_Cases[Use cases]
    CQI["Practice CQI + workflow triggers"]
    Payment["Payment incentives + benchmarking"]
    Public["Transparency/reporting\n(patient-facing summaries)"]
    Research["Learning system + research"]
  end

  EHR --> FHIR --> dQM
  PRO --> FHIR --> dQM
  Claims --> dQM
  Registry --> dQM
  SDOH --> FHIR --> dQM
  mCODE --> FHIR
  USCDI --> FHIR
  dQM --> Risk --> Use_Cases
  dQM --> Audit --> Use_Cases
  Risk --> CQI
  Risk --> Payment
  Risk --> Public
  Risk --> Research

Pragmatic staged roadmap and panel planning aids

Staged roadmap with win-conditions and stakeholders

Stage A (now–2027): Minimum viable VBCC measurement set becomes operational.
Win-conditions: (1) ePRO completion and triage response is reliable; (2) HRSN screening/referrals are captured; (3) core oncology facts are structured enough to compute at least two clinical-contextual measures (timeliness; regimen appropriateness). [4–6], [16]
Key stakeholders: oncology practices (especially community), EHR vendors, ePRO vendors, CMS/CMMI model teams.

Stage B (2028–2029): Digital computability and cross-payer alignment.
Win-conditions: (1) core VBCC measures are computed as dQMs from multi-source data feeds; (2) measure specs are aligned for at least a “core 8–12” across multiple payers; (3) semantic validation and audits are routine; (4) equity stratification is standard. [10–12], [13], [24]
Key stakeholders: CMS + commercial payers, NCQA/measure developers, ONC/ASTP, NCI/USCDI+ Cancer, HL7.

Stage C (2030): Credible outcome benchmarking with lower burden.
Win-conditions: (1) validated benchmarking on symptom burden/function and EOL quality is feasible; (2) manual abstraction is the exception, not the norm; (3) improvement cycles show measurable gains. The EOM endpoint (June 2030) is a natural forcing function to assess whether these win-conditions have been met. [4], [30–31]

Provocative decision-point questions for the panel

1.       What is the “minimum viable VBCC” measure set (8–12 measures) we will commit to—and which legacy measures should we retire? [11–12]

2.       Should ePRO-based measures be “process-accountable” (completion and response) first, then “outcome-accountable” (improved symptom burden/function) later—or should we jump directly to outcome accountability? [6–9]

3.       Which oncology clinical facts must be standardized first (stage, key biomarkers, line of therapy, recurrence), and who will pay for the workflow change—the payer, the EHR vendor, or the practice? [13–16]

4.       How do we prevent pathway concordance measures from becoming proprietary “black boxes” and exception-driven gaming? [25], [10–12]

5.       What is the right fairness model: do we adjust for social risk, stratify without adjustment, or use both with guardrails—and how do we avoid penalizing safety-net practices? [4–6], [13]

6.       Is timeliness a VBCC quality signal, an access signal, or both—and what data standard is required so it’s not just an EHR timestamp artifact? [13–16]

7.       What should be the “audit trigger” set for gaming or selection (e.g., abrupt shifts in exception rates or patient mix), and who should run audits? [24]

8.       If CMS/NCQA are driving a 2030 digital measurement horizon, what must happen by 2027 to avoid a ‘digital facade’ that is computable but not meaningful? [10–12], [31]

Suggested panelist types

A high-yield panel typically needs: a CMS/CMMI model leader (OCM→EOM lessons), a community oncology practice leader implementing ePRO + navigation, an ONC/ASTP interoperability or USCDI+ Cancer representative, an EHR vendor or FHIR/mCODE implementer, a payer quality lead familiar with dQMs and contracting, a measurement scientist (NCQA/NQF), and a patient advocate focused on symptoms/function/financial toxicity.

Evidence-backed talking points

·       OCM demonstrates the “care redesign without measurable outcome gain” dilemma: substantial practice effort did not translate to significant improvements on accountable quality measures versus comparison groups. [2–3]

·       EOM is a policy pivot toward patient-centered measurement: required redesign activities explicitly include ePRO collection/monitoring, HRSN screening, CEHRT, and CQI data use. [4–6]

·       ePRO symptom monitoring has unusually strong clinical evidence for a measurement domain: multiple trials show improvements in symptom burden, quality of life, acute care utilization, and in some studies survival—supporting symptom/function measurement as a VBCC cornerstone. [7–9]

·       Digital measurement is not optional; it is the burden-reduction strategy: CMS’s dQM definition and Meaningful Measures 2.0 explicitly frame digital, interoperable data and workflow-embedded PRO-PMs as the path away from manual abstraction. [10–11], [24]

·       Oncology needs a common data substrate: USCDI+ Cancer and mCODE are the clearest pathway to standardizing core oncology facts necessary for risk adjustment and meaningful comparisons. [13–16], [32]

·       Equity must be built into measurement design, not appended: EOM requires HRSN screening and equity-oriented activities; without stratification and fairness guardrails, VBCC incentives risk deepening disparities. [4–6], [13]

·       Financial toxicity is a real outcome domain with validated instruments: COST/FACIT‑COST is validated and can be operationalized as a VBCC measure paired with navigation response and stratification. [18]

Numbered bibliography

1. Porter ME. What Is Value in Health Care? (journal article). N Engl J Med. 2010. [1]

2. CMS / Abt Global. Evaluation of the Oncology Care Model: Final Report—Executive Summary (report, PDF). May 2024. [2]

3. CMS / Abt Global. Oncology Care Model—Final Evaluation At-a-Glance (report, PDF). 2024. [3]

4. CMS (CMMI). Enhancing Oncology Model (EOM) overview and model timeline (web page). Accessed 2026. [4]

5. CMS. Update: Enhancing Oncology Model Factsheet (web page). Jun 27, 2023. [5]

6. CMS. EOM Electronic Patient-Reported Outcomes Guide (report, PDF). Jun 2023. [6]

7. Basch E, et al. Symptom Monitoring With Patient-Reported Outcomes During Routine Cancer Treatment: A Randomized Controlled Trial (journal article). J Clin Oncol. 2016. [7]

8. Basch E, et al. Overall Survival Results of a Trial Assessing Patient-Reported Outcomes for Symptom Monitoring During Routine Cancer Treatment (journal article). JAMA. 2017. [8]

9. Basch E, et al. Effect of Electronic Symptom Monitoring on Patient-Reported Outcomes Among Patients With Metastatic Cancer: A Randomized Clinical Trial (journal article). 2022. [9]

10. CMS / eCQI Resource Center. Digital Quality Measures—About dQMs (CMS definition) (web page). 2026. [10]

11. CMS. Meaningful Measures 2.0 (web page). Updated 2026. [11]

12. CMS. The Universal Foundation (web page). 2025. [12]

13. ONC / ASTP. USCDI+ (including USCDI+ Cancer program description) (web page). Dec 2023. [13]

14. NCI (CBIIT). Real-World Data program and USCDI+ Cancer partnership (web page). Sep 2025. [14]

15. HL7 International. mCODE (Minimal Common Oncology Data Elements) Implementation Guide (web page). Accessed 2026. [15]

16. CMS. EOM Clinical Data Elements Guide (and mapping to HL7 FHIR API/mCODE) (report, PDF). Nov 2025. [16]

17. ICHOM. Colorectal Cancer Standard Set and Reference Guide (web page + PDF; outcomes + case-mix variables) (guideline/resource). 2017. [17]

18. de Souza JA, et al. Measuring financial toxicity as a clinically relevant patient-reported outcome: Validation of the COST measure (journal article). Cancer. 2017. [18]

19. Value-Based Cancer Care. Metrics to Keep in Mind for Value-Based Cancer Care (web page). Aug 2024. [19]

20. Jacobs DB, et al. Aligning Quality Measures across CMS—The Universal Foundation (journal article). N Engl J Med. 2023. [20]

21. Gondi S, et al. Assessment of Patient Attribution to Care From Medical Oncologists (journal article). JAMA Network Open. 2021. [21]

22. Kline RM, et al. Design Challenges of an Episode-Based Payment Model in Oncology (journal article). 2017. [22]

23. Holmgren AJ, et al. National trends in oncology specialists’ EHR inbox work, 2019–2022 (journal article). J Natl Cancer Inst. 2025. [23]

24. CMS / eCQI Resource Center. Semantic validation (definition and testing concept for eCQMs/dQMs) (web page). Updated 2025. [24]

25. NCCN. Development and Update of Guidelines (evidence-based guideline process) (web page). Accessed 2026. [25]

26. CMS. Oncology Care Model Fact Sheet (web page). 2016. [26]

27. Balch A, et al. Patient perspectives on cost and quality measures in value-based cancer care models (journal article). 2026. [27]

28. ONC / ASTP. ONC’s Cures Act Final Rule overview (web page). Accessed 2026. [28]

29. Federal Register. 21st Century Cures Act: Interoperability, Information Blocking, and the ONC Health IT Certification Program (web page). May 1, 2020. [29]

30. CMS. EOM Quality, Health Equity, and Clinical Data Strategy (timeline through 2030) (report, PDF). Aug 2024. [30]

31. NCQA. Why Digital Quality (CMS goal of transitioning to all digital measures by 2030; Universal Foundation alignment) (web page). Accessed 2026. [31]

32. Osterman TJ, et al. Improving Cancer Data Interoperability: The Promise of mCODE (journal article). 2020. [32]

33. Li Y, et al. mCODE Genomics pilot / proof-of-concept and limitations (journal article). 2024. [33]


[1] https://www.nejm.org/doi/full/10.1056/NEJMp1011024
[2] https://www.cms.gov/priorities/innovation/data-and-reports/2024/ocm-final-eval-report-2024-exec-sum
[3] https://www.cms.gov/priorities/innovation/data-and-reports/2024/ocm-final-eval-report-2024-aag
[4] https://www.cms.gov/priorities/innovation/innovation-models/eom
[5] https://www.cms.gov/newsroom/fact-sheets/update-enhancing-oncology-model-factsheet
[6] https://www.cms.gov/priorities/innovation/media/document/eom-elec-pat-rpt-outcomes
[7] https://pubmed.ncbi.nlm.nih.gov/26644527/
[8] https://pubmed.ncbi.nlm.nih.gov/28586821/
[9] https://pmc.ncbi.nlm.nih.gov/articles/PMC9168923/
[10] https://ecqi.healthit.gov/dqm
[11] https://www.cms.gov/medicare/quality/cms-national-quality-strategy/meaningful-measures-2-0
[12] https://www.cms.gov/medicare/quality/cms-national-quality-strategy/universal-foundation
[13] https://healthit.gov/standards-and-technology/uscdi-plus/
[14] https://www.cancer.gov/about-nci/organization/cbiit/projects/real-world-data
[15] https://build.fhir.org/ig/HL7/fhir-mCODE-ig/
[16] https://www.cms.gov/priorities/innovation/media/document/eom-clinical-data-elements-guide
[17] https://www.ichom.org/patient-centered-outcome-measure/colorectal-cancer/
[18] https://pmc.ncbi.nlm.nih.gov/articles/PMC5298039/
[19] https://www.valuebasedcancer.com/issues/2024/august-2024-vol-15-no-1/metrics-to-keep-in-mind-for-value-based-cancer-care
[20] https://www.nejm.org/doi/full/10.1056/NEJMp2215539
[21] https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2779755
[22] https://pmc.ncbi.nlm.nih.gov/articles/PMC5508445/
[23] https://academic.oup.com/jnci/article/117/6/1253/8051594
[24] https://ecqi.healthit.gov/glossary/semantic-validation
[25] https://www.nccn.org/guidelines/guidelines-process/development-and-update-of-guidelines
[26] https://www.cms.gov/newsroom/fact-sheets/oncology-care-model
[27] https://pmc.ncbi.nlm.nih.gov/articles/PMC12967066/
[28] https://healthit.gov/regulations/cures-act-final-rule/
[29] https://www.federalregister.gov/documents/2020/05/01/2020-07419/21st-century-cures-act-interoperability-information-blocking-and-the-onc-health-it-certification
[30] https://www.cms.gov/files/document/eom-qual-health-equity-clin-data-strat.pdf
[31] https://www.ncqa.org/digital-quality-transition/why-digital-quality/
[32] https://pmc.ncbi.nlm.nih.gov/articles/PMC7713551/
[33] https://pubmed.ncbi.nlm.nih.gov/38935887/