What appears to have gone wrong at MolDx: a management-evolution view
MolDx began as a necessary improvisation. In the early 2010s, Medicare was facing a molecular diagnostics problem that the legacy CPT and claims systems were not built to solve. One CPT code could hide many different tests. “Stacked” molecular codes did not identify the actual clinical service. Tumor panels, germline panels, expression classifiers, and later MRD assays were multiplying faster than Medicare’s normal coverage and coding apparatus could classify them. Public descriptions of the program still define its core mission as three linked functions: test identification, application review, and coverage/reimbursement determination. (Palmetto GBA)
That was the original brilliance of MolDx: it recognized that Medicare could not make rational payment decisions unless it first knew what test was actually being billed. The Z-code apparatus solved a real problem. It created test-specific identity in a world where CPT codes were often too broad, too old, or too generic. Other MACs later adopted the MolDx framework, and public MAC descriptions say the program identifies molecular diagnostic tests, establishes coverage and reimbursement, sets clinical utility expectations, completes technical assessments, and tracks utilization. (Medicare)
But the same feature that made MolDx innovative also created its long-term vulnerability. It was not merely a coding registry, not merely a coverage program, not merely a utilization-control program, and not merely a scientific review body. It became all of these at once. That is a hard hybrid to govern. It sits somewhere between a MAC, a payer technology-assessment committee, a claims-editing vendor, a scientific review board, and a quasi-national Medicare molecular diagnostics agency. Yet it does not have the transparent statutory architecture, appellate structure, staffing norms, performance metrics, or public accountability one would expect from a true national agency.
The result is an institution that appears to have outgrown its founding model.
1. The founder model worked when the problem was smaller
The early MolDx model seems to have depended heavily on a small number of unusually knowledgeable individuals. Dr. Elaine Jeter was publicly identified as the medical director of MolDx in the program’s formative period, and contemporaneous trade coverage described her as seeing the need to define what Medicare was actually paying for in molecular testing. (myadlm.org)
That kind of founder-led model can be extremely effective at birth. It allows rapid judgment, internal consistency, and a coherent philosophy. In 2012–2014, the molecular testing world was chaotic enough that even a rough but expert-controlled clearinghouse was better than the alternative. If the question was, “Is this a real test, what does it measure, what clinical use is claimed, and is there enough utility evidence to pay for it?” a small expert group could often answer.
But founder models usually fail at scale unless they are converted into institutional models. The founder knows the implicit rules. The founder remembers prior decisions. The founder can say, “This one is like that one, but different in these two ways.” The founder can make a judgment call without a written operating manual. That is efficient until volume rises, technologies diversify, and stakeholders begin to demand reproducibility.
At some point, tacit expertise has to become process architecture: clear evidentiary templates, review timelines, escalation pathways, reviewer calibration, written precedent, and public-facing decision logic. If that conversion does not happen, expansion simply produces more reviewers, more meetings, more files, and more elaborate controls—but not necessarily more throughput.
2. MolDx appears to have scaled staffing without fully scaling decisional architecture
The modern MolDx program is no longer a one-person intellectual shop. Dr. Gabriel Bien-Willner is publicly described as MolDx medical director and Palmetto GBA chief medical officer, and the program is described in industry forums as setting policy for affiliated MACs over a broad multistate footprint. (Trapelo)
That expansion should, in theory, have solved the bottleneck problem. More MDs and PhDs should mean more parallel review, more specialization, and more predictable cycle times. But stakeholder complaints suggest the opposite: the system may have gained staff without gaining decisional finality.
That is a classic management failure mode. Organizations often respond to backlog by adding expert reviewers. But if the underlying process has no clear standard for “enough evidence,” no binding decision clock, no structured endpoint, and no authority to say “approve, deny, or publish a noncoverage rationale,” the additional reviewers may increase review complexity rather than reduce it. Each reviewer can generate more questions. Each round can expose another uncertainty. Each uncertainty can justify another cycle.
The technical assessment process, by its nature, invites this. MolDx says technical assessments are required for certain molecular LDTs to determine compliance with policy. (Palmetto GBA) That is reasonable. But a technical assessment can become either a disciplined coverage review or an endless epistemology seminar. The difference lies in governance.
A disciplined review says: here is the question; here is the evidence standard; here is the submitted evidence; here are the remaining deficiencies; here is the decision. An undisciplined review says: here are two pages of questions; thank you for the ten-page response; here are six more pages of questions; thank you again; here are additional concerns; the file remains under review.
That is not merely slow. It changes the nature of the program. It turns MolDx from a coverage gateway into a perpetual uncertainty machine.
3. Broad LCDs plus test-specific technical assessments create a hidden bottleneck
One of MolDx’s most important structural choices is the use of broad LCDs and articles, with test-specific coverage often operationalized through Z codes, coding articles, or technical-assessment completion. The DEX/Z-code system identifies individual tests, and public descriptions say DEX is used as a critical business process in the MolDx program. (dexzcodes.com)
This architecture has advantages. It avoids writing a new LCD for every test. It allows a policy to cover a category—say, molecular tests for a cancer indication—while using test-specific review to decide which products qualify. That is administratively elegant.
But it also creates a submerged coverage process. The public LCD may say a class of tests is covered when reasonable and necessary. The real decision, however, happens in the private or semi-private technical assessment file. If dozens of specific tests remain in review, the practical effect is noncoverage, even though the LCD itself may appear broad and enabling.
This creates a legitimacy problem. Medicare coverage is supposed to be governed by public rules, public LCDs, reconsideration mechanisms, comment periods, and appealable claim decisions. But MolDx’s test-specific operational decisions can become the actual gatekeeping layer. That may be legally and administratively defensible in many cases, but from the stakeholder perspective it feels like being trapped in an unpublished administrative maze.
The worst version is: the LCD is broad, the evidentiary standard is elastic, the test-specific review is opaque, and the timeline is indefinite. In that setting, no one can plan. Investors cannot forecast coverage. Labs cannot decide whether to keep funding clinical studies. Clinicians cannot know when a useful test might become available. Patients become invisible casualties of procedural delay.
4. The program may have confused control with management
Dr. Bien-Willner’s apparent emphasis on protocols, best practices, and management controls is not inherently a flaw. A program like MolDx needs controls. Molecular diagnostics is filled with real risks: overutilization, weak evidence, bad coding, misleading claims, duplicate tests, poorly validated LDTs, and aggressive billing. A serious Medicare contractor cannot simply wave tests through.
The problem is that controls are not the same thing as management.
Controls ask: Did the applicant submit the required materials? Are the right boxes checked? Has each concern been answered? Is every risk closed?
Management asks: What is the purpose of the process? What throughput is required? What decisions must be made this quarter? Which files are stuck and why? Which standards are ambiguous? Which reviewers disagree? Which applicants deserve a final yes/no? Which recurring questions should be converted into public guidance?
A control culture can feel rigorous while performing poorly. Indeed, the more delayed the process becomes, the more a control-oriented organization may respond by adding more controls. More templates. More rounds. More internal signoffs. More “alignment.” More precision in the paperwork. But the output that matters is not the elegance of the internal protocol. The output is timely, consistent, explainable Medicare coverage decisions.
If new LCDs are rare and existing LCD pathways are clogged by endless test-specific review, the system may be optimized for not making mistakes, rather than for making accountable decisions. That is a different mission.
5. Conference visibility can become a symptom of role confusion
It is not wrong for a MolDx leader to speak at industry, policy, healthcare, or investor conferences. In fact, given the program’s importance, some public explanation is useful. The molecular diagnostics industry needs to hear what Medicare expects. Public engagement can reduce misunderstanding.
But when stakeholders experience operational gridlock, heavy conference visibility takes on a different meaning. It begins to look like external thought leadership has outrun internal execution.
That may or may not be fair to any individual. The director may be working extremely hard. The conference appearances may be only a small fraction of total effort. But organizationally, the optics matter. If applicants are waiting through repeated 60-day cycles, receiving expanding question sets, and seeing no final decision, they will not be reassured by polished conference presentations about rigorous process. They will ask: who is running the shop?
The deeper issue is not travel or speeches. The deeper issue is whether MolDx has a functioning chief operating discipline separate from its scientific and policy voice. A complex program needs someone accountable for cycle time, queue management, reviewer assignment, applicant communication, decision closure, and backlog reduction. That is an operations role, not a podium role.
6. The Z-code apparatus succeeded commercially but may have distorted the public mission
The Z-code/DEX infrastructure solved the “what test is this?” problem. It also became a licensed product used beyond Medicare. Public descriptions characterize DEX as a diagnostics exchange and molecular diagnostic test identification/policy management solution, and Optum markets Z-code data/registry products as a way to connect payers and labs and bring clarity to MDx testing. (dexzcodes.com)
That success creates another institutional ambiguity. Is MolDx primarily a Medicare coverage program? A payer-control architecture? A test registry? A data asset? A policy engine? A commercialized infrastructure?
The answer may be “all of the above,” but that is exactly the governance problem. When a public Medicare function becomes intertwined with proprietary identifiers, payer-facing data products, and multi-payer utilization management, stakeholders reasonably worry that the program’s incentives are no longer simple. The Medicare beneficiary and the reasonable-and-necessary standard should be central. But the broader business ecosystem may pull toward more registration, more classification, more control, more data capture, and more payer utility.
That does not mean improper conduct. It means mission creep. A tool built to support coverage can become a platform whose maintenance and expansion become ends in themselves.
7. The program’s authority is powerful but strangely hard to supervise
A central frustration is that there is no obvious chain of command. MolDx is administered by Palmetto GBA, a Medicare Administrative Contractor. MACs operate under CMS contracts. Other MACs may adopt MolDx policies. CMS has national coverage authority and can issue NCDs, but day-to-day MolDx technical assessments are not run by the CMS Coverage and Analysis Group in the ordinary way. Labs can complain to MolDx. Associations can write to Palmetto, CMS, or HHS. Congress can inquire. But there is no clean lever that says: “Replace the operating model by July 1.”
This is one of the program’s most serious structural defects. MolDx has become quasi-national in effect but remains contractor-local in formal governance. That mismatch is tolerable when things work. It becomes intolerable when they do not.
The Agendia litigation history illustrates how controversial this structure can be. In a Supreme Court petition, Agendia characterized MolDx as a contractor-established assessment process that effectively determines whether certain molecular diagnostic tests can be approved for Medicare coverage and payment. That is one litigant’s framing, not a neutral judicial finding, but it captures the industry perception of concentrated contractor power. (Supreme Court)
The management problem is therefore not just inside MolDx. It is also above MolDx. CMS benefits from MolDx because it provides specialized molecular diagnostics capacity without CMS having to build a full national office for every rapidly changing technology. But if CMS relies on MolDx as a de facto national infrastructure, CMS also needs a way to impose performance standards, transparency, and corrective action.
8. What “collapse” probably means here
Collapse does not necessarily mean no one is working. In fact, collapsed bureaucracies are often full of hardworking people. The collapse is functional, not literal.
A functioning program converts inputs into outputs. MolDx receives test registrations, evidence packages, technical assessments, clinical utility arguments, and policy requests. The outputs should be timely Z-code determinations, coverage decisions, coding instructions, article updates, LCDs, denials, reconsideration decisions, or public explanations.
If inputs continue to enter but outputs slow dramatically, the program becomes a queue with scientific language attached. If question rounds multiply but decisions do not, the program becomes a review loop. If leadership explains standards at conferences but applicants cannot get closure, the program becomes a reputation-management enterprise. If broad LCDs exist but specific tests cannot get through, the program becomes a coverage mirage.
That is what appears to have gone wrong: not that MolDx lost its original purpose, but that its original purpose became too large for its governance, staffing model, operating discipline, and accountability structure.
Remedies: what could realistically be done?
The remedies are limited because the authority lines are muddy. Still, several paths exist.
1. MolDx could impose an internal decision-clock and closure rule
The most immediate remedy would be internal. MolDx could adopt a formal rule: after a complete technical assessment submission, MolDx has a defined period—say, 90 or 120 days—to issue one of four outputs: approve, deny with rationale, request one consolidated set of additional information, or place the test in a named policy-development queue.
After the applicant responds, MolDx would get one further review cycle. Then it must decide. No indefinite rolling questions. No expanding interrogatories unless the applicant materially changes the claim.
This would not require CMS rulemaking. It would require MolDx leadership to value decisional finality as much as evidentiary rigor.
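For concreteness, the closure rule described above can be modeled as a small state machine. This is a purely hypothetical sketch — the outcome labels, the 120-day clock, and the one-round limit on information requests are illustrative assumptions drawn from the proposal in this section, not actual MolDx terminology or policy.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    # The four closure outputs proposed above (hypothetical labels).
    APPROVE = auto()
    DENY_WITH_RATIONALE = auto()
    CONSOLIDATED_RFI = auto()           # one consolidated request for information
    POLICY_DEVELOPMENT_QUEUE = auto()

@dataclass
class TechnicalAssessment:
    """Tracks one submission against a hypothetical decision clock."""
    days_elapsed: int = 0
    rfi_rounds: int = 0

    MAX_DAYS = 120        # illustrative; the text suggests 90 or 120 days
    MAX_RFI_ROUNDS = 1    # one consolidated question set, then a decision

    def record_outcome(self, outcome: Outcome) -> Outcome:
        # The clock forces a final output; indefinite review is not a state.
        if self.days_elapsed > self.MAX_DAYS:
            raise RuntimeError("decision clock exceeded: a final output was due")
        if outcome is Outcome.CONSOLIDATED_RFI:
            if self.rfi_rounds >= self.MAX_RFI_ROUNDS:
                raise RuntimeError("only one RFI round allowed; must now decide")
            self.rfi_rounds += 1
        return outcome
```

The design point is that “still under review” is not an allowed terminal state: every path through the process ends in one of the four named outputs, and a second round of rolling questions is a rule violation rather than a routine event.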
2. MolDx could publish test-category evidence templates
For categories like MRD, CGP, hereditary cancer, pharmacogenomics, infectious disease panels, transplant rejection, and Alzheimer’s biomarkers, MolDx could publish structured evidence templates. These would distinguish among analytical validity, clinical validity, clinical utility, intended-use population, comparator, outcome, intervention consequence, and minimum follow-up.
The point is not to lower standards. The point is to make standards predictable. Applicants should not discover the real standard through serial 60-day question letters.
3. CMS could require contractor performance metrics
CMS does not need to micromanage every technical assessment to require operational accountability. It could require reporting of median and 90th-percentile technical-assessment cycle time, number of pending applications by category, number pending over 180/365 days, number of approvals, number of denials, number of incomplete submissions, and reasons for delay.
This would convert stakeholder anecdote into governance data. If MolDx is performing well, the data will show it. If not, CMS will have evidence for corrective action.
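The reporting described above is computationally trivial — the point is that the metrics are well-defined and auditable. A minimal sketch of what a contractor performance report might compute, using invented sample data (every number below is hypothetical, for illustration only):

```python
from statistics import median, quantiles

# Hypothetical cycle times (days from complete submission to final decision)
# for closed technical assessments. Invented data for illustration.
closed_cycle_days = [85, 120, 140, 200, 210, 260, 310, 400, 455, 530]

# Hypothetical days pending so far for applications still open.
open_pending_days = [40, 95, 150, 190, 250, 370, 410]

report = {
    "median_cycle_days": median(closed_cycle_days),
    # quantiles(..., n=10)[8] is the 90th percentile (the 9th of 9 cut points).
    "p90_cycle_days": quantiles(closed_cycle_days, n=10)[8],
    "pending_total": len(open_pending_days),
    "pending_over_180_days": sum(d > 180 for d in open_pending_days),
    "pending_over_365_days": sum(d > 365 for d in open_pending_days),
}
print(report)
```

Nothing here requires CMS to judge the science of any individual test; the report measures only whether the process itself is converting submissions into decisions at a defensible pace.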
4. CMS could create an escalation pathway for stalled molecular test reviews
There should be a defined escalation pathway when a test-specific review remains unresolved beyond a threshold. That pathway could involve Palmetto senior leadership, a multi-MAC MolDx oversight committee, or CMS coverage staff. The appeal should not be “please approve me.” It should be “please determine whether the process has become procedurally unreasonable.”
That distinction matters. CMS need not substitute its scientific judgment in every case. But CMS can supervise whether a contractor’s process is timely, clear, and administratively fair.
5. Associations could send a targeted operational letter, not a vague grievance letter
A lab association letter to HHS or CMS should avoid sounding like “we want easier coverage.” The stronger letter would say:
MolDx has become essential Medicare infrastructure, but its current operating model lacks transparent cycle times, closure rules, escalation pathways, and public performance metrics. We request CMS oversight of process reliability, not a weakening of evidence standards.
That is a more credible position. It preserves Medicare’s need for rigor while focusing on the real defect: operational accountability.
6. MolDx could separate scientific policy leadership from operations leadership
The program may need a visible chief operating officer function. Scientific leaders should set standards, review complex evidence, and speak externally. But someone else should own queue management, reviewer productivity, applicant communications, deadlines, dashboards, and backlog reduction.
In mature organizations, the best scientist is not automatically the best operating executive. MolDx may need both.
7. LCDs should be updated more often, but only where they reduce hidden review
More LCDs are not automatically better. A proliferation of LCDs could create even more bureaucracy. But when a category is generating dozens of applications and repeated review loops, that is a signal that the standard is not sufficiently public. In those areas, MolDx should either revise the LCD/article framework or publish detailed subregulatory guidance.
MRD is the obvious example. If many tests are seeking coverage for specific tumor types, surveillance settings, recurrence-risk scenarios, and therapy-monitoring claims, the program needs a transparent matrix. Otherwise, each application becomes a private negotiation over first principles.
8. CMS could nationalize certain high-impact categories
For areas like broad tumor CGP, MRD, or Alzheimer’s blood biomarkers, CMS may eventually need national coverage architecture. The NCD for NGS in cancer shows that CMS can intervene when a technology class becomes too important for fragmented local handling. The current NCD grew out of the rapid expansion of NGS oncology testing and FDA-approved companion diagnostics, with national policy taking shape around 2017–2018. (CMS)
The risk, however, is that NCDs can be rigid and slow. Nationalizing every molecular test category would be a mistake. A better approach may be selective: CMS should nationalize only those categories where MolDx’s quasi-national local process is producing high-stakes bottlenecks.
The core diagnosis
MolDx was built to solve a real Medicare problem, and for years it did. But the program appears to have evolved from an expert workaround into a de facto national molecular diagnostics infrastructure without a matching governance model.
The founding model was expert judgment. The mature model should have become transparent expert administration. Instead, the program may now sit awkwardly in between: too large for founder-style discretion, too opaque for national policy, too powerful for ordinary contractor informality, and too procedurally elastic for companies trying to bring specific tests through coverage.
The remedy is not simply “be nicer to labs” or “approve more tests.” Medicare must still reject weak evidence and control abuse. The remedy is to restore the basic features of a functioning public program: clear standards, defined timelines, consolidated questions, final decisions, public metrics, escalation rights, and accountable management.
In short, MolDx does not need less science. It needs more institution.
###
Sidebar: When “management controls” and stakeholder experience diverge
A striking feature of the current MolDx situation is the gap between internal managerial self-description and external stakeholder experience. Leadership may sincerely believe that the program has become more professional over the past five years: more protocols, more staff, more defined review steps, more documentation, more controls, more standardized evidence expectations, and more internal signoffs. From inside the program, that may look like maturation. From the outside, however, laboratories may experience the same apparatus as delay, opacity, repetitive questioning, moving targets, and no clear path to a final decision.
This is a familiar organizational failure mode. A process can become more controlled without becoming more effective. Indeed, in complex bureaucracies, controls can multiply precisely because performance is deteriorating. More forms are added because prior submissions were inconsistent. More review layers are added because decisions were controversial. More templates are added because reviewers asked different questions. More internal meetings are added because the organization wants alignment. Each addition may be rational in isolation. Taken together, they can produce a system that is more defensible on paper but less capable of delivering timely outcomes.
The key distinction is between process control and process performance. A leader may point to protocols: every file is logged, every question is documented, every review is assigned, every evidence domain is checked, every decision is internally vetted. Stakeholders are measuring something else: How long did it take? Were the standards clear at the start? Did the questions converge or expand? Did the reviewers understand the intended use? Did the program reach a decision? Was there an appealable rationale? Could the company plan around the result?
Both sides may be telling the truth. MolDx may indeed have more management controls than it had five years ago. And stakeholders may also be right that the real-world operating experience has worsened.
The deeper problem is that management controls often serve the organization before they serve the customer, applicant, beneficiary, or public mission. Internal controls reduce internal risk. They make the file look complete. They help defend against accusations of inconsistency. They allow leadership to say the process is rigorous. But if the controls do not also produce predictable cycle times, consolidated questions, clear endpoints, and final decisions, they are not a management system. They are a protective shell.
For MolDx, this distinction matters because the program is not merely an internal payer function. It is a gatekeeper for Medicare access to molecular diagnostics. A stalled technical assessment is not just an administrative inconvenience. It affects laboratory investment, clinical adoption, physician confidence, beneficiary access, and the credibility of Medicare’s coverage process. A review loop that feels internally rigorous may function externally as unannounced noncoverage by delay.
One path forward is for MolDx to redefine excellence around externally observable performance, not internal procedural sophistication. The program should not ask, “Do we have a protocol?” It should ask, “Does the protocol produce timely, consistent, explainable decisions?” That implies public or semi-public metrics: median time from complete submission to first response; median time from response to final decision; number of applications pending over 180 days; number pending over one year; number approved, denied, withdrawn, or still cycling; and the most common reasons for delay. Without such metrics, claims of good management remain largely self-referential.
A second path is to impose a consolidated-question discipline. Stakeholders complain most bitterly when questions expand over time: two pages, then six pages, then more. A mature review process should identify the core deficiencies early. There may be exceptions, especially when an applicant changes its intended-use claim or submits weak/confusing data. But the default should be one comprehensive request for additional information, one applicant response, and then a decision or a clearly defined final step. Endless rounds create the impression that MolDx itself does not know what standard it is applying.
A third path is to create decision closure categories. Not every test will be approved, and not every application deserves indefinite refinement. MolDx could use clearer endpoints: approved; denied because evidence is insufficient; not reviewable because intended use is unclear; deferred pending public LCD reconsideration; or held because a category-wide policy is under development. This would be far better than the current apparent limbo, where applicants may feel they are neither approved nor denied, but simply suspended.
A fourth path is to distinguish scientific disagreement from operational failure. Some stakeholder anger reflects disagreement with MolDx’s evidence standards. That is inevitable. Medicare should not cover every test a company believes is useful. But other anger reflects process pathology: unclear expectations, slow reviews, repeated questioning, and no final answer. MolDx should be willing to say, “We will not weaken our evidence standards, but we will improve the reliability and transparency of the process.” That would put the argument on firmer ground.
A fifth path is external oversight. Because MolDx has quasi-national importance, CMS should not treat complaints as ordinary contractor grumbling. CMS could require performance reporting, establish an escalation route for stalled reviews, audit technical-assessment cycle times, and ask whether broad LCDs plus private test-specific reviews are functioning as intended. This would not require CMS to approve individual tests. It would require CMS to supervise the fairness and efficiency of the contractor process.
Finally, industry should frame its response carefully. A letter saying “MolDx is too strict” is easy to dismiss. A letter saying “MolDx lacks transparent cycle times, decision endpoints, consolidated information requests, public performance metrics, and escalation procedures” is harder to ignore. The strongest critique is not that MolDx has standards. The strongest critique is that its standards are experienced as procedurally unstable and operationally indefinite.
The best summary is this: MolDx leadership may be proud of having built a more elaborate machine. Stakeholders are saying the machine does not reliably produce decisions. The path forward is not another layer of process, but a shift from control culture to performance culture: fewer loops, clearer standards, measurable timelines, accountable closure, and a public recognition that rigor without throughput is not good management.
###