Thursday, January 15, 2026

SIDEBAR: The Rise of Multi-Channel Analyzers in Clinical Chemistry

 SIDEBAR to a broader article.


Sidebar to the Sidebar: When Did Multi-Channel Analyzers Become Common—and Why Do We Still See the Same Panels Today?

Multi-channel automated analyzers began to enter routine clinical laboratory use in the late 1970s and early 1980s, but they became truly widespread during the mid-to-late 1980s and early 1990s. This was the period when hospital and reference laboratories transitioned from largely manual or single-analyte chemistry testing to high-throughput platforms capable of measuring many analytes simultaneously from a single specimen. By the time OIG issued its guidance in 1997 and 1998, these analyzers were no longer experimental curiosities—they were the backbone of routine clinical chemistry.

The key innovation was not simply speed, but parallelization. Earlier analyzers performed one test at a time. Multi-channel analyzers, by contrast, were designed to run multiple assays together because it was operationally easier, cheaper, and analytically cleaner to do so. Once serum was loaded and the instrument calibrated, adding additional analytes often required little incremental effort. This technological reality collided directly with a reimbursement system that still conceptualized testing as a series of individually ordered services.

Out of this collision emerged the chemistry panels that are now so familiar they barely register as “panels” at all. The Basic Metabolic Panel (BMP)—typically including sodium, potassium, chloride, bicarbonate, BUN, creatinine, glucose, and sometimes calcium—reflects a core set of electrolytes and renal markers that were commonly run together on early automated systems. The Comprehensive Metabolic Panel (CMP) expanded this set to include liver-associated analytes such as AST, ALT, alkaline phosphatase, bilirubin, albumin, and total protein, yielding the familiar 14-analyte version used today. Variants with 20 or 21 analytes emerged historically as laboratories added calcium, phosphorus, or other markers depending on platform design and clinical convention.

What is often forgotten is that these panels were not originally “bundles” in a conceptual sense. They were engineering artifacts—groupings that made sense given how the machines worked. Physicians quickly adapted their ordering habits to these groupings because they were clinically useful and readily available. Over time, CPT coding and Medicare policy caught up, formalizing what had started as a technological convenience into standardized billable panels.

By the mid-1990s, however, policymakers faced a new question: what happens when technology makes it easier to run more tests than are clinically necessary for a given patient? OIG’s answer, reflected in the 1997 and 1998 guidance, was not to resist the technology, but to regulate the billing behavior around it. Laboratories could run broad panels if that was how their analyzers worked—but they could not bill indiscriminately.

Seen in this light, modern molecular panels are not a radical departure from historical practice. They are the genomic descendants of the chemistry analyzer. Just as it once made sense to run electrolytes and liver enzymes together, it may now make sense to sequence genes A, B, and C together. The regulatory question has remained strikingly stable for forty years: what matters is not what the machine runs, but what the laboratory reports and bills as medically necessary for each patient.

That continuity is precisely why the 1997–1998 OIG guidance still resonates today.

SIDEBAR: The OIG Laboratory Compliance Guides - 1997, 1998

SIDEBAR to a broader article.

[AI generated text]


  • OIG (1997) Model Laboratory Compliance. 62 Fed Reg 9435 (7pp), 3/3/1997. Here.
  • OIG (1998) Laboratory Compliance Update. 63 Fed Reg 45076 (12pp), 8/24/1998. Here.

###

Sidebar: The 1997–1998 OIG Laboratory Guidance — Why It Was Written, What It Said, and Why It Still Matters

When the Office of Inspector General issued its clinical laboratory compliance guidance in 1997 and then again in 1998, it was responding to a technology shift that now feels quaint but was genuinely disruptive at the time: the widespread adoption of automated, multi-channel analyzers. These platforms could run many chemistry tests simultaneously, often more efficiently than running a single analyte alone. The policy problem was immediate and practical. Medicare payment rules were built around individually ordered tests, but laboratory technology was increasingly optimized around bundled or parallel testing.

OIG recognized that it could not regulate laboratories by pretending this technology did not exist. Instead, the agency focused on what it viewed as the real compliance risk: billing and utilization behavior, not the internal mechanics of the analyzer. Across both guidances, OIG made clear that laboratories were not prohibited from running multiple tests together, even when only a subset was ordered for a particular patient. What mattered was that only those tests that were ordered, medically necessary, performed, and reported were billed to Medicare.

This principle runs throughout both documents. OIG repeatedly emphasized that laboratories do not practice medicine and do not decide medical necessity, but they are nonetheless responsible for ensuring that claims submitted to Medicare reflect only services that meet Medicare’s coverage standards. Laboratories were expected to design requisitions, billing systems, and utilization monitoring processes that discouraged overordering and prevented the automatic billing of unnecessary test components. Running a “superset” of tests was not itself problematic; billing indiscriminately for that superset was.

Seen from today’s vantage point, this guidance maps cleanly onto modern molecular workflows. A laboratory may find it efficient to run genes A, B, and C together because they are ordered together most of the time. The OIG’s position—then and now—is not that this efficiency must be undone, but that the laboratory must ensure that gene B or C is not reported or billed when it is not ordered or medically necessary. That is the core continuity between the era of chemistry analyzers and the era of sequencing panels.
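That four-part condition — ordered, medically necessary, performed, and reported — is essentially a set intersection, and can be sketched as one. This is purely an illustrative Python sketch of the principle, not anything drawn from the OIG documents; the gene names, function name, and data shapes are all hypothetical.

```python
# Illustrative sketch of the OIG principle: a laboratory may RUN a superset
# of assays for operational efficiency, but should BILL only those tests
# meeting all four conditions (ordered, necessary, performed, reported).

PANEL_RUN = {"gene_A", "gene_B", "gene_C"}  # hypothetical superset the platform runs

def billable_tests(ordered: set, medically_necessary: set,
                   performed: set, reported: set) -> set:
    """Return only the tests that satisfy all four OIG conditions."""
    return ordered & medically_necessary & performed & reported

# Example: the platform runs all three genes, but only gene_A was ordered
# and reported -- so only gene_A appears on the claim.
claim = billable_tests(
    ordered={"gene_A"},
    medically_necessary={"gene_A", "gene_B"},
    performed=PANEL_RUN,
    reported={"gene_A"},
)
```

The point of the intersection is that failing any one condition (e.g., run but never ordered, as with gene_B and gene_C here) drops the test from the claim, regardless of what the analyzer actually processed.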


What the Two Documents Share

Taken together, the 1997 and 1998 guidance articulate a unified regulatory stance. First, physicians may order any tests they believe are appropriate, but Medicare will only pay for tests that are covered, reasonable, and necessary. Second, laboratories are expected to structure their operations to support conscious ordering, not passive or default overutilization. Third, laboratories must have internal controls—requisition design, notices to physicians, utilization monitoring, and billing safeguards—to prevent automated technology from driving inappropriate claims. And fourth, OIG made explicit that laboratories could face False Claims Act or kickback exposure if their pricing, marketing, or billing practices encouraged unnecessary testing.

Importantly, neither document treats automation as inherently suspect. The tone is pragmatic, even permissive. OIG’s concern is not how many assays an analyzer can run, but whether the laboratory’s business practices undermine Medicare’s medical-necessity framework.


What Is Distinctive About the 1997 Guidance

The 1997 guidance is best understood as a first-generation response to automated testing. It is closely focused on multichannel chemistry panels, CPT-defined profiles, and requisition design. OIG spends considerable time on the mechanics of ordering—encouraging laboratories to design requisitions that force physicians to actively select tests, document diagnoses, and understand the billing consequences of profiles and customized panels. The document also introduces detailed recommendations around physician notices, acknowledgments for customized profiles, and internal monitoring of utilization growth, including the now-famous suggestion to investigate test volume increases exceeding 10 percent year over year.
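That 10-percent trigger is simple enough to express as a monitoring rule. A minimal Python sketch follows; only the 10 percent year-over-year figure comes from the guidance, while the function name, signature, and zero-baseline handling are my own assumptions for illustration.

```python
def flag_utilization_growth(prior_year: int, current_year: int,
                            threshold: float = 0.10) -> bool:
    """Flag a test for internal review when its annual volume grows by more
    than `threshold` (10% per the 1997 OIG guidance) year over year."""
    if prior_year == 0:
        # Hypothetical handling: any volume on a zero baseline merits review.
        return current_year > 0
    return (current_year - prior_year) / prior_year > threshold
```

A compliance team would run a check like this per test code per year; growth above the threshold is a prompt to investigate ordering patterns, not proof of wrongdoing.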

In short, the 1997 guidance is granular and operational. It reads as OIG’s attempt to translate a new technological reality into concrete compliance behaviors for laboratories that were still learning how automation intersected with Medicare rules.


What Is Distinctive About the 1998 Guidance

The 1998 guidance does not reverse the 1997 conclusions; instead, it refines and institutionalizes them. By 1998, OIG had issued compliance program guidance for hospitals and home health agencies, and it sought to align laboratory guidance with a broader, cross-industry compliance framework. The 1998 document is therefore more explicitly structured around the seven elements of an effective compliance program, drawing on the Federal Sentencing Guidelines and corporate integrity agreement experience.

Substantively, the 1998 guidance updates the discussion to reflect policy changes, including the elimination of certain multichannel chemistry billing codes from CPT, and it places greater emphasis on standing orders, reflex testing, and confirmation testing. OIG is more explicit about discouraging standing orders as a default practice and stresses that even where standing orders exist, laboratories must periodically revalidate their medical necessity. The tone is less about teaching laboratories how automation works and more about embedding laboratory billing into a mature compliance culture.


Why This Still Matters

Read together, the 1997 and 1998 OIG guidances show an agency grappling—deliberately and incrementally—with the consequences of automation. They do not prohibit efficient testing platforms. They do not require laboratories to cripple their workflows. Instead, they insist on a separation between what technology can do and what Medicare will pay for. That distinction remains foundational today, whether the platform in question is a chemistry analyzer, a sequencing pipeline, or a digital diagnostic workflow.

Understanding this history helps explain why modern debates about molecular panels and reflex testing feel familiar to regulators. The questions are old. Only the machines have changed.

Wednesday, January 7, 2026

Myriad PathomIQ Coding



An Entry Point to Potentially Disastrous AMA CPT Changes


 


The evolution of the Myriad–PATHOMIQ relationship offers a compact case study in how cutting-edge computational pathology is colliding with a reimbursement and coding system that was never designed to recognize information extraction as a medical service. At its scientific core, PATHOMIQ represents a disciplined and unusually mature attempt to answer a legitimate clinical question: whether routinely generated histopathology images—H&E slides that already exist in every prostate cancer case—contain latent, reproducible signals that can predict recurrence, metastasis, or treatment response. From a research standpoint, the answer increasingly appears to be yes. From a business standpoint, however, the answer has become far more ambiguous, not because the science failed, but because the payment architecture has no place to put it.


PATHOMIQ’s early development reflects a rare alignment of translational oncology, pathology, molecular biology, and enterprise AI. The company did not emerge from a single academic lab nor from a purely technical AI incubator; it was constructed at the intersection of both. Its founders include senior academic pathologists and oncologists with access to long-follow-up tissue cohorts and clinically meaningful endpoints, paired with a CEO whose background signals commercial intent from the outset. That combination shaped PATHOMIQ into what might be called a “model-as-asset” company: not a workflow vendor, not a CLIA service lab, but a generator of validated prognostic models intended to be licensed into larger diagnostic ecosystems. Scientifically, this strategy made sense. Commercially, it assumed that once clinical validity was established, a payment pathway would eventually follow. That assumption is now proving increasingly fragile.


The Myriad licensing deal in early 2025 initially appeared to confirm the viability of this approach. Myriad’s first descriptions of the collaboration framed PATHOMIQ_PRAD as a distinct, AI-driven prostate cancer test—fast, image-based, independent, and in some cohorts outperforming traditional markers. The language suggested optionality: PATHOMIQ could stand beside genomic tests like Prolaris, offering orthogonal insight derived from morphology rather than RNA expression. This framing implicitly tested a bold proposition: that algorithmic interpretation of existing clinical data might itself be treated as a billable diagnostic service. In early 2025, Myriad sounded as though it was prepared to see whether the market—and perhaps the CPT system—might accept that proposition.


By late 2025, that tone had changed markedly. Myriad’s public communications no longer described PATHOMIQ as a standalone test but as an embedded analytic layer inside Prolaris. AI was reframed as an “amplifier,” not an answer; as an enhancement, not a replacement; as supportive of clinician judgment rather than an independent decision tool. This rhetorical pivot was not subtle to those who understand reimbursement mechanics. It reflects a recognition that the standalone “upload DICOM, download report” business model—however elegant scientifically—is structurally homeless in the current CPT ecosystem. What began as a potential new diagnostic modality was quietly absorbed into an already covered one.


This shift exposes the deeper problem: computational pathology sits at the intersection of two Venn diagrams that no longer overlap. On one side is science and product development. PATHOMIQ and similar companies are extracting clinically relevant information from data that medicine already collects, at marginal cost and with increasing rigor. They are not inventing new biomarkers in the classical sense; they are revealing structure in existing data. On the other side is the CPT framework, which remains stubbornly biomarker-centric. PLA codes reward the creation of new analytes. Category I codes reward established professional services. MAAA codes reward multianalyte laboratory tests. None of these categories recognize information itself as the billable output when no new wet-lab signal is produced.


The result is a slow-motion train wreck. Computational pathology tests that do not create new molecules are systematically excluded from PLA eligibility. They are incompatible with Category I because they are proprietary, versioned, and algorithmic, and because CPT resists paying twice for “interpretation” of the same specimen. MAAA is explicitly unavailable because images are not analytes. Category III offers, at best, a tracking code with little prospect of payment. CMAA—if it emerges as currently envisioned—risks becoming a registry for AI tools rather than a reimbursement pathway, a kind of Appendix S for algorithms: visible, cataloged, and unpaid.


What makes this particularly reckless from a policy standpoint is that the CPT system is not merely failing to keep up with innovation; it is actively shaping commercial behavior in ways that suppress it. Myriad’s absorption of PATHOMIQ into Prolaris is rational and probably unavoidable, but it is also telling. The AI does not disappear because it lacks value; it disappears because it cannot survive on its own economically. The cost of innovation is shifted from revenue line to internal expense. The signal is preserved, but the incentive to build similar standalone tools is weakened. Smaller companies without a Myriad-scale partner are left with no viable path at all.


The danger here is not that CPT is conservative—that has always been true—but that it is enforcing an outdated ontology of diagnostics at precisely the moment medicine is becoming information-centric. Computational pathology challenges the assumption that value arises only when a new biomolecule is measured. It argues instead that value can arise when existing data are interpreted more deeply. By rejecting that premise and diverting these services into non-payable categories, the AMA risks creating a perverse outcome: encouraging molecular redundancy while discouraging informational efficiency.


Seen in this light, the Myriad–PATHOMIQ story is less about one partnership than about a systemic mismatch. The science is converging on morphology-derived insight as a legitimate predictor of outcome. The business models are converging on licensing and integration as survival strategies. Meanwhile, CPT is converging on exclusion—pushing these technologies out of PLA, away from Category I, and into administrative limbo. If this trajectory continues, the U.S. risks building a diagnostics ecosystem that pays generously for new molecules while quietly strangling tools that extract more value from what we already have. That is not just a coding problem; it is a policy failure with long-term consequences for how innovation in pathology is financed, scaled, and ultimately delivered to patients.


 


 


##


Below is a single, continuous essay-style report, written independently and analytically, designed to give you language, framing, and “color” you can mine for a white paper that is openly critical of AMA CPT policy direction. I am deliberately not footnoting or citing documents; this is an interpretive synthesis, not a compliance memo.


 


ESSAY PART TWO: WHERE IS AMA TAKING US?

The current AMA CPT posture toward artificial intelligence is not malicious, but it is conceptually misaligned with where large parts of diagnostic medicine are actually headed. The principles now being articulated—particularly the resistance to autonomous AI services, the insistence on physician mediation, and the fear of RVU dilution—make internal sense in the world of adjunctive AI in radiology, but they break down badly when applied to computational pathology and genomics, where AI is not advisory, not suggestive, and not subordinate to human interpretation. In those domains, AI is the test.


Radiology AI, which appears to be the tacit reference model for many CPT deliberations, is structurally different. Most radiology AI tools flag regions of interest, highlight subtle findings, prioritize worklists, or suggest second looks. They operate as overlays on a physician’s interpretive act. The radiologist remains the locus of judgment, liability, and narrative authority. From a CPT perspective, it is therefore natural—almost inevitable—to treat these tools as adjunctive, non-billable, or bundled services. They enhance physician work; they do not replace it. In that context, concerns about double payment, RVU inflation, and erosion of physician control are at least internally coherent.


Computational pathology and genomics do not fit that model at all. In these fields, the “result” is not a suggestion to a physician; it is a probabilistic or categorical determination derived from data at a scale no human can meaningfully process. When a genomic assay analyzes hundreds of millions or billions of DNA fragments and reports a pathogenic mutation linked to an FDA-approved therapy, there is no meaningful point at which a pathologist’s opinion is injected into the result. The interpretive act has already occurred—algorithmically, deterministically, and reproducibly—before the report is generated. The physician’s role is clinical integration, not analytical interpretation. Treating this process as if it were analogous to a radiologist glancing at a highlighted corner of a mammogram fundamentally misunderstands the epistemology of the test.


Computational pathology occupies the same conceptual space. When an AI model analyzes a digitized H&E slide and produces a validated risk score for recurrence or metastasis, the output is not a “second opinion” on what a pathologist already saw. It is a new informational product derived from high-dimensional pattern recognition across thousands or millions of features. No human pathologist could reproduce it, even in principle. The insistence that such outputs must be framed as physician-mediated services, or else relegated to non-payable registries, reflects an outdated belief that diagnostic legitimacy flows only through human visual interpretation. That belief is already obsolete in genomics, and it is rapidly becoming obsolete in digital pathology.


The AMA’s resistance is further complicated by a category error about money. Much of the anxiety around AI within CPT appears rooted in the physician fee schedule: fears that new AI services will siphon dollars from E/M and procedural services, accelerate RVU proliferation, and force further conversion-factor deflation. Those are real concerns inside the physician payment system. But they are largely irrelevant to computational pathology and genomics, which live on the Clinical Laboratory Fee Schedule, not the RVU-based PFS. These tests do not compete with physician work RVUs; they compete with other diagnostics. Blocking them does nothing to protect physician income, but it does distort innovation in laboratory medicine.


In effect, CPT is using governance principles designed to prevent AI from unbundling physician interpretation in radiology and applying them wholesale to domains where interpretation has already been disintermediated by data scale. The result is not thoughtful containment but structural exclusion. By rejecting these services from PLA eligibility, declaring them incompatible with Category I, and shunting them toward Category III, Appendix-like registries, or future CMAA constructs that lack payment authority, CPT is signaling that algorithmic information extraction from existing clinical material is not a legitimate diagnostic act. That position is increasingly untenable.


The Myriad–PATHOMIQ evolution illustrates the consequences. Initially, PATHOMIQ was framed as a standalone AI diagnostic test, reflecting confidence in its scientific validity. Within months, it was repositioned as an embedded, invisible enhancement inside an already covered genomic assay. This was not a scientific retreat; it was a reimbursement survival maneuver. The AI did not fail. The coding system did. Smaller companies without a Myriad-scale sponsor will not be able to make this pivot, and many will simply disappear or remain trapped in perpetual pilot status.


The deeper risk is that CPT policy is unintentionally privileging molecular redundancy over informational efficiency. If payment is available only when a new analyte is measured, innovators will be driven to create ever more complex wet-lab assays—even when equivalent or superior information could be extracted computationally from data we already generate. That is not cost containment; it is technological distortion. It rewards chemistry over computation, not because chemistry is better, but because the coding ontology cannot see computation as a test.


In short, AMA CPT is not trying to cause a disaster, but it is governing the future with the wrong mental model. AI in radiology is not the template for AI in pathology or genomics. Advisory tools are not autonomous diagnostics. RVU anxiety is not relevant to CLFS innovation. Until CPT explicitly recognizes that some AI systems are the diagnostic act—rather than aids to it—the collision between policy and science will intensify. And when that collision becomes visible to payers, industry, and Congress, it will not be interpreted as prudent stewardship. It will be seen as an institutional failure to adapt to how medicine now produces knowledge.


##

Myriad February to October 2025

##

Are there any interesting differences between Myriad's description of its PATHOMIQ collaboration initially (February–March 2025) and recently (October–November 2025)?




Yes — there are meaningful and revealing shifts between Myriad’s initial description (Feb–Mar 2025) of the PATHOMIQ collaboration and the later framing (Oct–Nov 2025). They are subtle in wording but significant in strategic intent, especially when viewed through a reimbursement, coding, and commercialization lens.

Below is a structured comparison, grounded directly in the documents you uploaded.

________________________________________

1. February–March 2025: PATHOMIQ as a standalone AI test

In the initial press release (Feb 24–25, 2025) and contemporaneous coverage (e.g., Urology Times), Myriad presents PATHOMIQ_PRAD very much as its own clinical test, albeit one licensed and commercialized by Myriad.

Key characteristics of the early framing:

a) Explicit “AI-driven clinical test” language

The press release repeatedly states Myriad intends to “commercially launch its first AI-driven prostate cancer clinical test later this year.” The implication is a distinct test offering, parallel to Prolaris, not merely an internal analytic component (2025 Myriad 0225 PathomIQ PRAD …).

b) Emphasis on speed, independence, and outperforming markers

PATHOMIQ_PRAD is described as:

  • Analyzing digitized H&E whole-slide images
  • Returning results within 1 day
  • Outperforming Gleason, TNM, and genomic tools in some cohorts

This language positions PATHOMIQ_PRAD as a competitive prognostic modality, not just a complementary add-on (2025 Urol Times 0301 Myriad and…).

c) Strong hints at future platform expansion

The early materials stress that although the current model is post-prostatectomy, the foundational model could apply broadly across solid tumors. That reads like a platform commercialization vision, not merely a prostate-specific enhancement (2025 Myriad 0225 PathomIQ PRAD …).

Interpretation:

At launch, Myriad appears to be testing the waters for whether PATHOMIQ could stand as a billable, brand-recognizable AI test—even if bundled operationally with Myriad infrastructure.

________________________________________

2. October–November 2025: PATHOMIQ as a component inside Prolaris

By October 2025, especially in Myriad’s own blog post (“Enhancing Prostate Cancer Genomics with AI-Powered Morphometric Analysis”), the framing shifts decisively.

a) PATHOMIQ is no longer framed as a standalone test

Nowhere in the October 2025 blog does Myriad describe PATHOMIQ_PRAD as a separate, orderable test. Instead, the language is:

“Soon, the Prolaris Test will integrate AI from PATHOMIQ…” (2025 Myriad 1023 Progress with …)

That is a major rhetorical downgrade from “launching our first AI-driven test.”

b) AI is repositioned as an “amplifier,” not an answer

Repeated phrases include:

  • “AI plus genomics”
  • “Not a replacement; it’s an upgrade”
  • “A precision amplifier”

This is a clear de-risking move. PATHOMIQ is now framed as:

  • Supporting clinician judgment
  • Enhancing an already-covered genomic assay
  • Avoiding claims of independence or displacement (2025 Myriad 1023 Progress with …)

c) Independence is explicitly walked back

The October post even critiques tools that market themselves as “independent decision aids”, arguing instead for consolidated, integrated reports that already incorporate CAPRA, clinicopathology, and genomics.

That passage reads, frankly, like a quiet repudiation of how PATHOMIQ_PRAD was initially positioned earlier in 2025 (2025 Myriad 1023 Progress with …).

Interpretation:

By fall 2025, Myriad appears to have concluded that selling PATHOMIQ as a standalone AI test is commercially and policy-risky, and that its highest value is as an embedded analytic layer inside Prolaris.

________________________________________

3. What changed — and why it matters

From a reimbursement perspective

The shift aligns almost perfectly with the CPT realities you outlined earlier:

  • A standalone “upload DICOM → download report” test has no clean CPT home
  • PLA is unavailable
  • Cat I is structurally hostile
  • Cat III is weak and unattractive
  • CMAA is uncertain and likely non-paying

Embedding PATHOMIQ inside Prolaris allows Myriad to:

  • Avoid new CPT battles
  • Preserve existing coverage
  • Treat AI costs as internal COGS, not billable services

From a risk-management perspective

The early 2025 language was ambitious and externally facing (investors, urologists, media).

The late 2025 language is defensive, integrative, and payer-aware.

This is classic Myriad behavior: innovate aggressively, then stabilize within reimbursement reality.

________________________________________

4. Bottom line

Yes — the evolution is real and important.

  • Early 2025: PATHOMIQ_PRAD is framed as a new AI clinical test that could stand alongside or even outperform existing tools.
  • Late 2025: PATHOMIQ becomes an embedded morphometric layer inside Prolaris, explicitly positioned as complementary, supportive, and non-disruptive.

That shift strongly suggests Myriad learned — quickly — that AI-only morphology products face structural barriers in CPT and payer systems, and that the winning move is absorption, not exposure.
