Tuesday, November 25, 2025

CDX USA vs EU and CHANGES

In practical terms, this move means that for a big slice of oncology CDx, the U.S. pathway is about to become materially lighter than the EU IVDR pathway, even though both systems still ask for broadly similar science.

Below is a sponsor’s-eye view: what you actually have to do, who you talk to, how long it tends to take, and how much pain is involved — “recent state” vs “future state”.


1. Recent reality (last few years): both sides are punishing, but in different ways

United States – CDx as full PMA

Until this reclassification, an oncology NAAT/NGS CDx tied to a drug almost always meant Class III + PMA:

  • Core submission: a full PMA, often co-developed with the drug. This bundles:

    • Large clinical bridging datasets aligning the assay readout with drug benefit (response or PFS/OS) in a registration trial or a carefully curated external dataset. FDA Access Data+1

    • Full analytical performance (LoD, precision, reproducibility, interference, etc.).

    • Software validation, bioinformatics pipeline verification, manufacturing and QC detail.

  • FDA interactions: multiple pre-subs, then a PMA review with panel risk for novel mechanisms, plus supplements for label changes, indications, biomarkers, platforms, etc.

  • Time and money (order-of-magnitude, not a formal rule):

    • User fees: FY 2025 standard PMA fee is about $541k, versus about $24k for a 510(k). Federal Register+1

    • Timelines: statutory PMA review clock is 180 days, but with at least one major additional-information (“AI”) cycle, the practical window is often 12–24 months from first serious interaction to approval for a complex oncology CDx.

    • Internal cost: large multi-year teams; it’s credible when sponsors complain of “multi-year, multi-tens-of-millions” PMA CDx programs.

In other words, U.S. CDx PMA has been the archetypal “four-year, 100,000-page” ordeal.


European Union – IVDR CDx: Notified Body plus EMA

With IVDR fully in force for new CDx:

  • Risk class is fixed at Class C for all CDx. EFPIA

  • What the sponsor actually does:

    • Builds full technical documentation (performance evaluation = scientific validity + analytical + clinical performance) and full QMS dossier for a Notified Body (NB). EFPIA+1

    • The NB must seek a scientific opinion from EMA or a national competent authority on the suitability of the CDx for the medicinal product (Art. 48(3)–(4) IVDR). European Medicines Agency (EMA)+1

  • Consultation mechanics:

    • EMA consultation is officially 60 days, extendable to 120 days, with no clock stops once the consultation has begun. TÜV SÜD

    • In practice, the total NB + EMA cycle is heavily impacted by NB capacity; sponsors widely report 12–24+ month journeys, especially during the transition bulge for IVDR. Lindus Health+1

  • Costs:

    • EMA charges a specific CDx consultation fee, and NBs charge substantial fees for QMS and technical doc review. European Medicines Agency (EMA)+1

    • MedTech Europe’s 2025 burden analysis suggests that re-certification of one IVD QMS certificate averages ~€168k in NB fees alone, and technical doc reviews for Class C devices are in the tens-of-thousands per product, not counting internal costs. MedTech Europe

So under IVDR, Europe has moved from relatively light IVDD treatment to a system that is robustly heavy for CDx: NB + EMA, structured performance evaluation, and expensive, slow capacity-constrained conformity assessment.


Practical comparison recently (pre-reclassification)

For a sponsor with a new oncology CDx tied to a novel drug:

  • FDA PMA was usually the single heaviest, deepest submission – especially for first-in-class indications.

  • IVDR CDx was no longer “easy Europe”: similar demands on evidence, plus a more bureaucratic route (device authorities and EMA; NB scarcity; separate fees).

A reasonable practical view the last couple of years has been:

“Both FDA and IVDR are painful for CDx. The U.S. PMA is more intense document-by-document; IVDR is more structurally clunky and slower because of NB/EMA logistics.”


2. Future state for oncology NAAT/NGS CDx under the new FDA rule

The proposed FDA rule says: for “Nucleic Acid-Based Test Systems for Use with a Corresponding Approved Oncology Therapeutic Product”, FDA will:

  • Create a new Class II regulation with special controls, and

  • Allow future devices of this type to follow the 510(k) pathway instead of PMA. Federal Register+1

What changes in practice for a sponsor?

(a) Type of submission and depth of fight

  • Instead of a bespoke PMA, the sponsor builds a 510(k) referencing a well-defined device type and special controls:

    • Robust analytical validation (on par with what PMA CDx already showed).

    • Clinical support can lean heavily on:

      • Prior PMA CDx in the same biomarker/drug space, and/or

      • Literature and drug-side data, as long as the sponsor shows the test is “equivalent enough” analytically to an established assay.

  • FDA’s debate space shifts from “Is this entire benefit–risk story acceptable?” to “Have you demonstrated substantial equivalence under these special controls?” – typically fewer cycles and narrower arguments.

(b) Timelines and predictability

  • 510(k) statutory review goal is 90 days; oncology CDx will still have intense review, but under a framework that’s built for shorter, more standardized cycles than PMA.

  • There will still be tough questions on cut-offs, bioinformatics, and labeling, but they’re asked against a template; they are less likely to become multi-year open-ended negotiations.

(c) Fees and internal cost

  • User fee drops from PMA-scale (about $541k) to 510(k)-scale (about $24k in FY 2025); a quick arithmetic sketch follows at the end of this subsection. Federal Register+1

  • Internal sponsor costs should also compress because:

    • You no longer design a full PMA program every time.

    • You can reuse well-understood validation architectures and study designs that map directly to the codified special controls.
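As a quick sanity check on the fee compression above, here is a back-of-envelope sketch in Python. The inputs are only the approximate FY 2025 figures quoted earlier (about $541k for a standard PMA, about $24k for a 510(k)), so the outputs are order-of-magnitude illustrations, not official fee-schedule values.

```python
# Back-of-envelope user-fee comparison for a CDx sponsor, using the
# approximate FY 2025 figures quoted in the text (not official values).

PMA_FEE_USD = 541_000   # standard PMA fee, approximate
K510_FEE_USD = 24_000   # standard 510(k) fee, approximate

savings = PMA_FEE_USD - K510_FEE_USD
ratio = PMA_FEE_USD / K510_FEE_USD

print(f"User-fee savings per submission: ~${savings:,}")  # ~$517,000
print(f"PMA fee is roughly {ratio:.0f}x the 510(k) fee")  # roughly 23x
```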

(d) Change control

  • Changes that previously demanded PMA supplements (platform changes, panel extensions, cut-off refinements) may, in many cases, be handled via:

    • 510(k)s for significant changes, or

    • Letter-to-file / internal documentation for smaller changes that don’t affect intended use or core performance claims.

  • By contrast, IVDR’s current guidance keeps many CDx changes inside formal NB + EMA re-review tracks, with new certificate supplements and defined triggers. Pure Global+1


3. Practical head-to-head: “What would a sponsor feel?”

Scenario: Follow-on EGFR or MSI NAAT CDx for oncology

In the U.S. – after reclassification:

  • Sponsor prepares:

    • Full analytical package per special controls.

    • Focused clinical comparability to an existing FDA-accepted assay / prior PMA CDx.

    • 510(k) dossier structured directly against the new regulation’s special controls.

  • Pays a 510(k) fee, targets a ~90-day review, maybe one substantial additional-information (AI) cycle.

  • Negotiates labeling within an existing labeling paradigm (“for selection of patients for X”).

In the EU – under IVDR:

  • Sponsor still:

    • Builds full Class C technical doc.

    • Undergoes NB review, plus:

    • NB triggers EMA or NCA consultation to confirm suitability with the medicinal product.

  • Faces:

    • EMA timeline of up to 60–120 days once consultation starts. TÜV SÜD

    • NB queues and iteration that often push overall duration well past a year.

  • Pays:

    • EMA consultation fees plus substantial NB charges for QMS and technical documentation review.

Net practical comparison, future state:

  • For mature oncology CDx technologies in the FDA’s new category, the U.S. will now be the faster, cheaper, and more standardized route for new entrants and follow-ons.

  • Europe remains structurally “high friction” for CDx under IVDR: NB + EMA, capacity constraints, and relatively rigid change-control signals.


4. Bottom line: what happens to “harmonization” in the real world?

  • Previously (PMA vs IVDR CDx), both sides were heavy; one could argue the systems were practically comparable in overall burden, albeit in very different ways.

  • After this FDA reclassification for oncology NAAT/NGS CDx, the situation changes:

    • Scientific expectations remain broadly aligned (good analytical and clinical validation; strong drug–test linkage).

    • Operational burden diverges:

      • U.S. becomes a Class II / 510(k) regime with codified special controls, shorter timelines, lower fees, and more flexible change management for this family of CDx. Federal Register+1

      • EU CDx under IVDR stays Class C with NB + EMA consultation, higher external fees, and structurally slower certification and recertification. EFPIA+2TÜV SÜD+2

For a policy-savvy reader, the practical takeaway is:

In scientific terms, FDA and IVDR are speaking the same language about what CDx should prove. In operational terms, this FDA move makes the U.S. pathway materially lighter than Europe’s for oncology NAAT/NGS CDx — effectively de-harmonizing the workload, even as the evidentiary ideals converge.

You said:
OK that's great. Can you also summarize this more concisely which makes it a little easier for the reader to absorb in one reading. Keep key facts but more in paragraph format and not so long as you just did. i will use both, in different places.
ChatGPT said:

Here is a more concise, paragraph-style summary that preserves the key policy and practical points.


Concise Practical Comparison: FDA’s New CDx Pathway vs. IVDR

Under the EU IVDR, all companion diagnostics are locked into Class C, requiring a full technical file, Notified Body review, and a mandatory EMA (or national authority) consultation on the drug–test linkage. In practice, this produces long, multi-step reviews often running 12–24 months, driven partly by Notified Body capacity constraints. Costs are substantial: NB technical-file reviews, EMA consultation fees, and heavy internal preparation. Change control also remains strict; many modifications trigger new formal reviews. IVDR has therefore become a high-friction, high-burden system for CDx, even when the underlying assay technology is mature.

Historically, the FDA PMA pathway for oncology CDx was similarly burdensome: sponsors routinely submitted massive analytic and clinical datasets, endured multi-year PMA cycles, and managed ongoing PMA supplement obligations. The process could be exacting and expensive, with user fees in the hundreds of thousands of dollars and multi-year internal program costs.

The new FDA proposal fundamentally changes this equation for nucleic-acid based oncology CDx. After reviewing 17 PMAs and a decade of experience, FDA concludes that these technologies are now well understood, with risks fully characterizable through special controls. As a result, these CDx will move from Class III (PMA) to Class II (510(k)). The required evidence—robust analytical validation and a clear demonstration of clinical relevance—remains rigorous, but the submission is now structured against a defined regulatory template with narrower debates than PMA. User fees fall dramatically, and review timelines align with the 90-day 510(k) process, even if oncology CDx will receive close scrutiny.

Practically, this means that for follow-on EGFR, BRAF, MSI, LOH, or similar oncology NAAT/NGS CDx, the U.S. will become materially faster, cheaper, and more predictable than Europe. The scientific expectations of FDA and IVDR remain broadly aligned—both still require strong analytical and clinical performance, drug-test linkage, and clear labeling—but the operational burden diverges sharply. IVDR continues to function as a high-cost, multi-agency process; FDA is reclassifying mature CDx technology into a streamlined, mid-tier regulatory track.

The net effect is that scientific harmonization persists, but regulatory workload is no longer harmonized. FDA’s move aligns the evidence standards but pulls away from Europe in efficiency, creating a lighter U.S. pathway for many oncology CDx than the IVDR can presently offer.

AI Talks About Lab Mergers

See prompt at bottom. Please note this isn't "my opinion," but rather an experiment in giving ChatGPT 5 a task and seeing how it uses the internet, how it organizes an essay, etc.

[ChatGPT often provides multi-citations together, e.g. "+2", "+4." Only the first one cut-and-pastes into HTML.]

#####

AI Talks About Lab Mergers

The short version is: Big diversified med-techs keep buying oncology labs because the strategic story is intoxicating, even though the operating reality (payer-facing lab services) is grinding, local, and hard to scale. The case studies are clear, but the people making the next deal are often working from a different set of incentives and a conveniently selective memory.


1. Today’s set-up: Abbott–Exact as the new “platform” bet

Bloomberg and others are reporting that Abbott is nearing a deal to acquire Exact Sciences, with EXAS up ~20–25% on the news and ABT off a couple of points. Investors+2Investing.com+2 The rumored logic is familiar:

  • Abbott gets instant entry into oncology diagnostics and multi-cancer early detection, a space it mostly sits outside today. Investing.com+1

  • Exact brings branded tests with real scale and payer coverage (Cologuard, Oncotype DX), plus a salesforce that already calls on primary care and oncology. Investors+1

  • Abbott can, in theory, plug that into its global footprint, manufacturing muscle, and balance sheet.

On a PowerPoint slide, that’s irresistible: a $200B global platform buying a $10–20B “growth engine” and cross-selling cancer screening to everyone who already buys i-STAT cartridges and Libre sensors.

The question you’re asking is: haven’t we seen this movie before? Yes—GE/Clarient, Novartis/Genoptix, Roche/Foundation—and the third act is rarely what was pitched in the board deck.


2. Case study 1 – GE buys Clarient: from “molecular oncology platform” to carved-out asset

In 2010, GE Healthcare agreed to acquire Clarient, a cancer-focused molecular diagnostics and reference lab, for about $587M. BioSpace+2MassDevice+2

The strategic narrative:

  • Build a billion-dollar oncology diagnostics business, integrating GE’s imaging with Clarient’s molecular workups. labmedica.com+1

  • Own the “diagnostic continuum” from image to biomarker to therapy selection.

  • Cross-sell Clarient tests through GE’s global installed base.

What happened:

  • Clarient stayed essentially a U.S. reference lab business, grinding through pathology cases and payer authorizations, not an integrated imaging-genomics platform.

  • By 2015, GE sold Clarient to NeoGenomics for up to $275M (cash and stock)—roughly half the original headline value. NeoGenomics, Inc.+1

You don’t usually halve your purchase price five years later if the original thesis is working.


3. Case study 2 – Novartis buys Genoptix: strategic adjacency that never quite fit

Novartis bought Genoptix in 2011 for about $470M to “strengthen personalized medicine” and build a hematopathology diagnostics platform around its oncology portfolio. PR Newswire+2Reuters+2

Again, the narrative:

  • Use Genoptix to support Novartis’ heme-onc drugs with better diagnostics.

  • Create a center of excellence in oncology testing and informatics.

Reality:

  • Genoptix remained a commercial lab with U.S. oncologist customers, dealing with coverage, coding, and collections—not a neat companion-diagnostic bolt-on to Novartis’ pharma machine.

  • In 2017, Novartis sold the lab business to Ampersand and 1315 Capital for an undisclosed amount, widely seen as a retrenchment from the original strategy. Inside Precision Medicine+2PR Newswire+2

  • In 2018, NeoGenomics acquired Genoptix from those investors for $138.7M plus stock, showing how much value had evaporated from the original $470M check. The Wall Street Journal+1

So: a half-billion-dollar strategic jewel turns into a mid-hundreds-of-millions community lab after a few trips through financial engineering.


4. Case study 3 – Roche and Foundation Medicine: real strategic value, but a goodwill bonfire

Roche first took a controlling stake in Foundation Medicine in 2015, then bought out the remainder in 2018 under a deal framed as accelerating global availability of comprehensive genomic profiling in oncology. Roche Assets+2Foundation Medicine+2

Unlike Clarient or Genoptix, FMI did achieve real strategic integration:

  • FMI’s CGP reports are deeply embedded in Roche’s oncology trials and commercial strategy.

  • FMI became an anchor asset in the global precision-oncology conversation.

But even here, the economics have bitten back. In Roche’s 2023 financials, they booked ~$0.6B of goodwill impairment related to the Foundation Medicine acquisition, effectively writing off the entire goodwill balance. Roche Assets+2Roche+2

Translation: even when the strategic fit is real, the price and underlying lab economics can’t sustain the level of optimism that was capitalized at acquisition—especially once COVID bump revenue, changing oncology pricing, and payer pushback wash through.

Case Study 4 – Bio-Techne Acquires Exosome Dx

  • In June 2018, Bio-Techne announced it would acquire Exosome Dx for $250M in cash and up to $325M in further contingent payments. Here.
  • In August 2025, Bio-Techne divested the Exosome assets to MDxHealth for ~$5M. Here.
  • The CEO remarked, "Bio-Techne will strengthen its foundation as a developer of innovative, high value, high margin products, tools and reagents for the life science research..."


5. What these deals had in common

Across Clarient, Genoptix, and Foundation, the pattern looks very familiar from your world:

  1. Mismatched business models.
    The buyer is used to selling products (instruments, reagents, drugs) with gross margins in the 60–80% range, paid by hospitals and distributors. The target is a service lab with 35–55% margins, heavy labor, and every dollar subject to prior auth, denials, and MAC policy nuances. The acquirer mentally treats the lab like a “platform technology,” but post-close it behaves like…a CLIA lab.

  2. Overestimation of scalable “synergies.”
    Corporate strategy decks assume: “We’ll just run this through our global salesforce” or “we’ll bundle it with our drugs.” In practice, oncologists already have reference lab loyalties, hospital systems have entrenched contracts, and payers don’t care that you’ve rebranded the lab with a BigCo logo. GE never turned Clarient into an imaging–omics juggernaut; Novartis never made Genoptix the default companion lab; Roche found that even FMI’s halo doesn’t exempt it from reimbursement grind.

  3. Underestimation of payer friction.
    When you live in boxes-and-reagents, the payer is “behind the glass,” paying DRGs and test panels. When you own a patient-billed molecular lab, you’re now swimming in LCDs, MolDx tech assessments, prior auth vendors, and appeals, and your growth is constrained by coverage and utilization management, not by how many oncologists your reps can see. That reality rarely appears in the glossy M&A pitch.

  4. Cultural and operational clash.
    Lab businesses are messy, manual, and physician-centric; BigCo is matrixed, SAP-driven, and compliance-centric. Efforts to “standardize” or “globalize” the acquired lab often slow it down and erode what made it competitive—turnaround time, pathologist relationships, willingness to do bespoke work.

  5. Executive turnover and memory loss.
    The teams who championed Clarient and Genoptix at GE and Novartis weren’t the ones six years later deciding to exit at a discount. The institutional narrative resets: the original “must-have strategic platform” becomes “non-core asset” when leadership changes or investor pressure shifts.

  6. Accounting illusion vs. operational reality.
    Goodwill is just the capitalized difference between what you paid and the tangible/intangible assets. For a while, high growth masks any mismatch. Then growth slows, payers get tougher, and the auditors look at the forecast and say: this goodwill is no longer supportable (cue Roche’s $0.6B FMI impairment). Roche Assets+1 A toy example of the arithmetic follows below.
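To make the accounting arithmetic concrete, here is a toy sketch with invented numbers. Real impairment testing operates at the reporting-unit level and compares carrying value against fair value, so treat this strictly as a simplification.

```python
# Toy illustration of goodwill and impairment mechanics.
# All numbers are invented, not actual deal figures.

purchase_price = 470        # $M paid for the target
identifiable_assets = 150   # $M fair value of tangible + intangible assets

goodwill = purchase_price - identifiable_assets
print(f"Goodwill capitalized at close: ${goodwill}M")  # -> $320M

# Later, growth slows and payers tighten; auditors conclude the
# forecasts only support part of that goodwill.
supportable_value = 90      # $M of goodwill still supportable
impairment = max(goodwill - supportable_value, 0)
print(f"Impairment charge: ${impairment}M")            # -> $230M
```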


6. So why do they keep doing it, and what does that say about Abbott–Exact?

Given that track record, why might Abbott still write a very large check for Exact?

  • Strategic narrative pressure. Oncology diagnostics and MCED are framed as “the future of health care”. Boards and investors want a story about being in that future. For Abbott—strong in chronic disease monitoring, less present in oncology—Exact is a turnkey way to buy that narrative.

  • Platform FOMO. Competitors have planted their flags: Roche/FMI, Illumina/GRAIL (even if structurally messy), Thermo in tissue-based CDx. There’s a persistent fear that if you don’t buy now, someone else will, and you’ll be locked out of the next decade’s growth.

  • Real, visible revenue vs. early-stage bets. Cologuard, Oncotype, and the broader Exact portfolio already generate billions in revenue with established coverage and codes, unlike earlier transactions where the lab’s future was mostly speculative. Investors+1 This makes it much easier to justify a high price inside a DCF, even if you quietly under-model payer risk.

  • Optionality logic. From a corporate-finance viewpoint, a big acquisition is often justified as buying a “call option” on upside scenarios (e.g., MCED really breaks out, guidelines incorporate Exact tests more broadly). The fact that Clarient/Genoptix/FMI didn’t ultimately earn their goodwill doesn’t mean this option logic goes away—it just gets re-applied to the next shiny asset.

  • Different reference class. Internally, teams don’t show a slide labeled “Our Past Oncology Lab Acquisitions That Didn’t Work.” They compare the deal to successful platform buys (e.g., Alere for Abbott in point-of-care, or St. Jude/Guidant-type device roll-ups), not to Clarient and Genoptix. Selective benchmarking is a hell of a drug.

For Abbott specifically, there is at least one meaningful difference vs. the earlier case studies:

  • Abbott is already in the diagnostics services reimbursement trench, albeit more through products (e.g., Alinity) than through national oncology labs. They have more muscle memory around CLFS dynamics than GE or Novartis did in 2010–2011.

  • Exact’s big franchises [Cologuard and Oncotype] are deeply embedded in U.S. screening and oncology practice with defined coding and coverage; this isn’t a “we’ll get coverage someday” bet in the same way Clarient’s or even early-FMI’s pipelines were. [BQ, major point!]

But the core risk you’re flagging remains: Abbott is a box-and-reagent company contemplating a large, payer-facing oncology lab whose economics live and die on Medicare LCDs, commercial prior auth rules, and subtle shifts in guideline language. If they over-romanticize the “Abbott + Exact” synergy story and underweight the MolDx/BCBS grind, they are absolutely at risk of becoming the next case study you cite on a slide in 2030.


###

###

Again: This is a biased prompt to test the result.

Today, the big life sciences news is that ABBOTT may buy EXACT SCIENCEs (market caps $200B and $20B respectively.) I work in molecular pathology policy, and selling to payers (Blue Cross, Medicare, etc) is much harder than selling little boxes wholesale and cashing a check. Consider the histories of GE buying CLARIENT and NOVARTIS buying GENOPTIX. I understand there was even a big write-down, eventually, after ROCHE buying FOUNDATION MEDICINE. Use agentic mode and internet research of the business cases and histories and write it up. Why does this keep happening when the case studies seem so clear.

Friday, November 14, 2025

Christine Bump and Shannon Bennett on AI meets LDT

https://www.youtube.com/watch?v=mqCxE1tz1lM

Autotranscript at YouTube

Summary

#

The webinar opens with moderator Xander van Wijk from the ADLM Innovation and Technology Division explaining that the session will focus on the intersection of artificial intelligence and laboratory-developed tests (LDTs) in clinical diagnostics, a topic many labs are anxious about post–LDT final rule and lawsuit. He introduces the two speakers: regulatory attorney Christine P. Bump, who has worked on LDT issues since 2004 and advises labs, academic medical centers, and companies on FDA strategy, and Shannon Bennett, a regulatory affairs leader and Mayo Clinic assistant professor with deep experience in clinical test development, verification, validation, and quality systems. The stated learning objectives are to define the boundaries of FDA oversight for LDTs versus regulated devices like software as a medical device (SAMD), to identify when AI/ML software becomes a medical device, and to describe risk-mitigation strategies for introducing AI into LDT and broader lab workflows.

Christine begins by “level setting” what an LDT is and, just as important, what it is not, especially in light of the recent litigation. She reviews that since around 1992 FDA asserted that LDTs were medical devices but said it would exercise “enforcement discretion”—claiming authority but mostly choosing not to regulate. This culminated in the 2024 LDT final rule, which explicitly declared that LDTs are devices by modifying the regulatory definition of in vitro diagnostic (IVD) devices. That rule was challenged by ACLA and AMP, and in March 2025 Judge Jordan in the Eastern District of Texas sided with them, ruling that LDTs are not “devices” under the Food, Drug, and Cosmetic Act. The crux of his reasoning was the statutory device definition: a device must be a tangible, physical product introduced into interstate commerce. He found LDTs to be categorically different—services performed within a single lab rather than tangible goods shipped as kits.

Christine explains that, in the decision, the judge effectively gives a functional definition of an LDT: a single laboratory develops its own method and process, receives a specimen on a physician’s order, tests the specimen using that internally developed protocol, and issues a result. The protocol and method never leave the lab. That’s what makes it a laboratory service rather than a device or kit; only the test result leaves the lab. This fits with FDA’s own historical distinction between IVD kits—where a manufacturer can put reagents, protocol, and components in a box and ship it—and LDTs, where no tangible product is shipped. Although FDA had argued LDTs were also devices, the court rejected that. After Judge Jordan vacated the LDT final rule, FDA had until late May to appeal and chose not to. Then, in September, the agency formally rescinded the rule, deleting the ten words that had added “including when the manufacturer of these products is a laboratory” to the IVD definition. That put the regulatory text back to the pre–May 2024 IVD definition in 21 CFR 809.3.

Christine stresses that while many are rightly celebrating that FDA cannot independently regulate LDTs as devices, this does not mean FDA has lost all leverage over labs. The agency retains its full general device authority. She walks through three main regulatory “hooks” that still touch LDTs. First, specimen collection kits are separately regulated devices, often Class I or Class II with 510(k) requirements. If a lab promotes its LDT in a way that bundles or implies use of a particular collection kit, FDA can act against the kit, effectively reaching the LDT indirectly. Second, many LDTs use components that are themselves regulated devices: analyte-specific reagents (ASRs) have their own device classification because FDA determined in the 1990s that they independently meet the device definition, and RUO reagents are subject to strict rules on permitted uses and promotional claims. A lab that misuses or mis-promotes ASRs or RUOs can draw FDA enforcement even if the overarching LDT is outside the device framework. Third, software used in testing can be either a component of a test or a standalone medical device. Christine notes that device law defines a device broadly enough to include software, and FDA has also been given explicit authority over certain software functions by the 21st Century Cures Act. The Texas LDT decision cannot be generalized to argue that software is not a device, because FDA’s software authority does not rest solely on the LDT/device argument.

Shannon then takes over to define software as a medical device and distinguish it from software embedded in instruments. Software as a medical device (SAMD) is software that, by its intended use, is meant for diagnosis, prevention, monitoring, treatment, or alleviation of disease—or to assist with diagnosis, screening, prognosis, or prediction—by analyzing data rather than specimens. A critical concept is “manufacturer”: whoever designs and controls the software has the regulatory responsibility. If a lab builds its own AI pipeline, the lab becomes the manufacturer and owns the regulatory burden; if the lab purchases a commercial tool, the vendor typically carries that burden unless the lab modifies or co-develops it. Shannon distinguishes SAMD from “software in a medical device” (SiMD), which is software that only functions in the context of a specific instrument, like the software embedded in a flow cytometer or PCR platform. In those cases the instrument manufacturer is usually responsible for both hardware and software as a single regulated device.

Shannon explains that the 21st Century Cures Act carved out certain categories of software that are not considered SAMD. These include administrative support software for billing, scheduling, and business analytics; “healthy lifestyle” apps that track steps, diet, or fitness without diagnostic claims (though blood pressure tracking is becoming more scrutinized); electronic records systems like LIS and EHRs that merely store and display data without independent interpretation; and basic data management tools. There is also a special category, clinical decision support (CDS), where software could conceptually be a device but FDA is deprioritizing enforcement if it meets four criteria. To qualify as CDS, the software must not acquire, process, or analyze medical images or instrument signals (a hurdle most lab AI fails because they analyze slides or signals); must not directly display or manipulate detailed patient medical information in a diagnostic way; must provide recommendations rather than dictating decisions; and must be independently reviewable, with transparent logic rather than a black box. If the software fails any of these, it falls outside CDS and is more likely treated as SAMD.

To make this concrete, Shannon offers practical lab examples. Software to document PTO requests is clearly administrative. Simple Excel spreadsheets used to track reagent lots are just data storage. Software that drives a flow cytometer and performs analysis is software in a medical device, regulated as part of the instrument. On the other hand, next-generation sequencing (NGS) bioinformatics pipelines that process signals from multiple instruments or tests and generate interpreted reports are strong candidates for SAMD, and FDA explicitly refers to them in CDS guidance as typically not qualifying for CDS. Software that automatically orders antibiotics based on susceptibility results with no human review, or a black-box algorithm producing colon cancer risk scores without explainability, are also likely to be treated as SAMD because they directly influence treatment and lack transparency.

Christine returns to explain how FDA applies its existing risk-based device framework to software and AI. The agency continues to use three device classes: Class I (low risk, often exempt from premarket review but subject to quality system requirements); Class II (moderate risk, generally requiring 510(k) review); and Class III (high risk requiring PMA). Low-risk software includes general wellness apps that do not reference disease; moderate-risk examples include tools that monitor parameters like heart rate and feed information into clinical decision-making; high-risk examples include AI systems used for cancer detection or HIV-related decisions. She points out that the classical device framework assumes relatively static products—once cleared or approved, substantive changes trigger new submissions. That model simply doesn’t work for learning systems; AI and ML are designed to change as they ingest new data, and many of those changes could affect safety and effectiveness.

Recognizing this, FDA has shifted toward a “total product life cycle” approach for AI-enabled software, built out largely through guidance documents. Christine describes three main components. First, predetermined change control plans (PCCPs), for which FDA has issued final guidance for AI-enabled device functions. When a sponsor seeks clearance or approval, they also submit a plan describing what aspects of the model may change over time and how those changes will be validated and controlled, effectively pre-approving a bounded evolution of the algorithm. Second, life cycle management guidance (currently in draft form) addresses how user interfaces, risk profiles, and data handling strategies should be managed as software evolves. Third, FDA is placing heavy emphasis on real-world evidence (RWE), seeking to understand how AI-enabled devices perform post-market and even proposing structured ways to measure and evaluate AI tools in real-world settings; the agency has recently requested public comment on such frameworks.

Christine then outlines how FDA assesses software risk more generally. Key dimensions include the intended use (what exactly the software does; whether it is adjunctive or determinative; whether it targets high-risk diseases like cancer or HIV), the level of control over care decisions (is it supporting or replacing human judgment), who uses it (patients vs clinicians, inside or outside controlled settings), the nature of the functionality (just organizing data vs analyzing and recommending treatment), and how it is integrated (standalone or deeply embedded in complex systems). For AI and ML specifically, she notes that FDA has repeatedly highlighted concerns about lack of transparency and explainability (black-box models that clinicians cannot interpret), data drift (performance degradation over time as real-world data shifts), data bias (training data not representing all relevant populations), cybersecurity vulnerabilities (especially for network-connected tools handling medical data), and integration effects (a model that is safe in one system may behave differently when integrated into another environment, creating new safety issues).

In the final substantive section, Shannon focuses on what all this means practically for laboratories that are developing or co-developing software. His central message is that everything starts with the intended use statement, and that intended use must be objective, “boring,” and match what the lab can actually prove. He contrasts a fictional, over-the-top intended use—an AI that “automatically diagnoses all major diseases with 99.9% accuracy,” replaces doctors, and ushers in the “world of individualized medicine”—with a more realistic example: software that assists lab professionals and clinicians by organizing and displaying test results and highlighting values outside reference ranges, while leaving final decisions to professional judgment. The former invites impossible evidentiary burdens and high-risk classification; the latter frames the tool as a supportive aid and is more defensible. Shannon cautions against marketing language in intended use, overly precise performance claims, assertions about replacing physicians, and generic references to “life-threatening conditions” unless those are clearly justified.

Shannon urges labs that have developed or co-developed software to build robust documentation proactively rather than trying to retrofit an FDA-ready file under pressure. He describes three layers of documentation. Foundational documents include a clear intended use statement and a device description mapping data sources, interfaces, and algorithms. Ongoing documentation covers training data (what populations were used, and where biases or exclusions might lie), verification and validation work (expected vs actual outputs and associated performance metrics), and systematic change management. On change control, he stresses the risks of uncontrolled “tinkering” by enthusiastic developers and recommends formal versioning (bundling changes into releases), scoping validation appropriately (more for algorithmic changes, less for cosmetic ones), and centralizing all records—akin to a master file—so they are easy to retrieve. Standard operating procedures should cover how the software is used, how issues are handled, and how routine changes (like adding a new gene to a pipeline or extending to a new patient group) will be designed, validated, and implemented.

In the Q&A, Xander and the audience raise practical questions. One asks how NGS pipelines differ from mass spec or PCR analysis software. Shannon explains that instrument-tied software embedded in a mass spectrometer or qPCR platform is usually “software in a medical device,” with regulatory responsibility resting on the instrument manufacturer, whereas bioinformatics pipelines that aggregate signals from multiple platforms and tests and deliver interpreted outputs are more likely to be treated as SAMD—especially since FDA singles them out in its CDS guidance as typically not qualifying as CDS. Both Shannon and Christine emphasize there is a lot of gray and that labs should behave as if they might be regulated, documenting and validating accordingly. Another question asks what the actual “chance” is that FDA will consider a specific pipeline as SAMD; Christine declines to give a numeric probability, noting that from the lab’s perspective, if you are the one selected for scrutiny the risk is effectively 100%, and that FDA staff routinely review marketing, attend conferences, and respond to perceived safety or effectiveness concerns. Questions also touch on how IQ/OQ/PQ concepts apply to AI (conceptually similar to wet-lab tests: define what the system should do and confirm that it behaves accordingly), whether model calibration metrics like Brier score should be reported (any robust evidence of performance and calibration is helpful), and whether an AI that generates SOPs and validation documents would be a device (generally no, because it supports documentation and does not directly drive diagnosis or treatment). A hypothetical AI that flags IV-contaminated chemistry specimens might or might not be SAMD depending on how it ingests data (raw instrument signals versus processed numeric results) and how tightly it is integrated into decision-making. In response to a final question about whether new regulations are likely as the administration promotes AI and machine learning, Christine is clear that this space will continue to be very active: FDA has already issued an unusually large volume of guidance for software and AI compared to other areas, and more guidance—and eventually more detailed regulations under the existing device authority—are expected. The session concludes with Xander thanking the speakers, reminding attendees about continuing education credit, and noting that the recording will be posted on the ADLM Artery site.

#

Here’s a detailed but organized summary of the webinar.


1. Context and Goals of the Webinar

Moderator Xander van Wijk opens the session, hosted by the ADLM Innovation and Technology Division, focused on the intersection of artificial intelligence (AI) and laboratory-developed tests (LDTs) in clinical diagnostics.

Two speakers:

  • Christine P. Bump – Regulatory attorney with ~20 years of FDA experience, especially in diagnostics, genomics, digital health, and LDT issues since 2004.

  • Shannon Bennett – Regulatory affairs leader and Mayo Clinic assistant professor with long experience in lab test development, verification, validation, and quality systems.

Learning objectives:

  1. Define regulatory boundaries of FDA oversight for LDTs and for regulated software devices (e.g., software as a medical device, SAMD).

  2. Identify criteria that classify AI/ML software as a medical device.

  3. Apply risk-mitigation strategies to AI integration into LDT and lab workflows.


2. Christine: What Is (and Is Not) an LDT, Post-Lawsuit

2.1 Historical FDA Position and the 2024 LDT Final Rule

  • Since ~1992, FDA claimed LDTs were medical devices, but under “enforcement discretion” (they said they could regulate them but usually chose not to).

  • In 2024, FDA issued the LDT final rule, explicitly stating that LDTs are devices, by modifying the regulatory definition of in vitro diagnostic (IVD).

2.2 ACLA/AMP Lawsuit and the Texas Decision

  • ACLA and AMP sued; in March 2025, Judge Jordan (E.D. Texas) ruled that LDTs are not devices under the Federal Food, Drug, and Cosmetic Act.

  • The ruling hinged on the statutory definition of a device:

    • A device must be a tangible, physical product introduced into interstate commerce.

    • Judge Jordan held that LDTs are categorically different:

      • An LDT is a service performed in a single lab using the lab’s own method and process.

      • No tangible product (like a kit) is shipped in interstate commerce.

      • Only results leave the lab; the protocol never leaves.

He effectively gave a working definition of an LDT:

  • A single lab:

    • Develops its own method and process.

    • Receives a specimen on a physician order.

    • Tests that specimen using its own protocol.

    • Releases a result.

  • Crucially, each lab’s protocol is unique, created with its own knowledge, and does not leave that lab. That makes it a service, not a shipped device/kit.

2.3 Rescission of the LDT Final Rule

  • Judge Jordan’s decision vacated the LDT final rule. FDA had until end of May to appeal and chose not to, so the rule could not be enforced.

  • In September, FDA formally rescinded the rule by deleting the 10 added words from the IVD definition:

    • They had added: “including when the manufacturer of these products is a laboratory.”

    • Removing those words restored the IVD definition to pre-May 2024, under 21 CFR 809.3.

Bottom line:
FDA cannot independently regulate LDTs as devices. But all other FDA authorities remain intact.


3. Christine: FDA Still Has Hooks into Labs via Components and Software

Christine emphasizes: labs cannot be complacent.

Even though LDTs themselves are not devices, FDA still has authority over:

  1. Specimen collection kits

    • These are devices on their own (often Class I or II; Class II usually needs 510(k)).

    • If a lab markets an LDT in a way that implicitly or explicitly includes a collection kit, FDA can go after the kit (and effectively reach the LDT via that route).

  2. Components like ASRs and RUO reagents

    • ASRs (analyte-specific reagents):

      • Are independently regulated devices with their own device classification.

      • FDA in the 1990s decided ASRs independently meet the definition of a device, based on risk seen in LDTs.

    • RUO reagents:

      • Strict rules on promotion and use; mis-promotion can trigger enforcement.

    • LDTs often use these components, so labs must stay within FDA’s rules on ASRs and RUOs.

  3. Software used in testing

    • Software can be:

      • A component of a testing process, and/or

      • A standalone medical device in its own right.

3.1 Device vs Component – Definitions

  • Statutory device definition (FD&C Act) includes:

    • Instruments, apparatus, implements, machines, implants, in vitro reagents, and any component or part, intended for diagnosis, cure, mitigation, treatment, or prevention of disease.

  • FDA regulations separately define component as:

    • Any raw material, substance, piece, part, software, or firmware intended to be part of a finished, labeled device.

Normally, components are not independently regulated; they’re part of the finished device that gets a single clearance/approval.
But if a component itself meets the full device definition, FDA can regulate it as its own device (e.g., ASRs, some software).

3.2 Why the LDT Decision Doesn’t Save Software

  • LDT decision: LDTs are not devices because they’re intangible processes in one lab.

  • Software is also intangible—but:

    • FDA’s authority over software is not limited to the LDT/device argument.

    • Software is recognized as a device both under the FD&C Act and the 21st Century Cures Act.

    • FDA has specific, separate statutory authority for certain software, including what is and isn’t regulated.

So we cannot extrapolate the LDT ruling to argue that software shouldn’t be a device. FDA’s framework for software remains intact.


4. Shannon: Software as a Medical Device (SAMD) vs Software in a Medical Device

4.1 Definition and the Role of the “Manufacturer”

Software as a Medical Device (SAMD):

  • Intended to diagnose, prevent, monitor, treat, or alleviate disease, or to support diagnosis, screening, prognosis, prediction, etc.

  • Key idea: analyzes data, not specimens directly.

Manufacturer = the entity responsible for design and regulatory compliance:

  • If your lab develops the software, your lab is the manufacturer and carries the regulatory burden.

  • If you buy software, typically the vendor is the manufacturer (unless you modify/co-develop).

  • Co-development creates gray zones; who the manufacturer is must be clearly agreed upon.

4.2 SAMD vs “Software in a Medical Device”

  • SAMD:

    • Standalone software.

    • Can take inputs from many sources/tests (e.g., multiple gene results to calculate a cancer risk score).

  • Software in a Medical Device (SiMD):

    • Embedded in an instrument; only functions with that instrument (e.g., flow cytometer analysis software).

    • The instrument manufacturer usually holds the regulatory burden for both hardware and embedded software.


5. Shannon: Software Exemptions under the 21st Century Cures Act

Certain software functions are explicitly carved out and not considered SAMD:

  1. Administrative support software

    • Billing, scheduling, business analytics.

  2. Healthy lifestyle / wellness apps

    • Track diet, workouts, steps, sometimes heart rate, etc., without a diagnostic claim.

    • Note: blood pressure tracking is becoming a gray area as FDA scrutiny increases.

  3. Electronic patient records / LIS / EHR

    • As long as they store and display data without independent interpretation or analysis.

  4. Data management / databases

    • Pure data storage or basic retrieval.

5.1 Clinical Decision Support (CDS) – “Deprioritized” but Not Totally Outside FDA

Some software could be SAMD but is treated as lower regulatory priority if it qualifies as CDS. To qualify, it must pass four tests:

  1. Does not acquire/process/analyze medical images or signals.

    • Most lab AI fails here:

      • Pathology slide analysis = medical image → fails.

      • Pattern-recognizing mass spec output = signal → fails.

  2. Does not display/analyze/print detailed patient medical information beyond helping a clinician review data.

  3. Provides recommendations, not directives.

    • It aids prevention/diagnosis/treatment but does NOT directly dictate actions.

  4. Is independently reviewable.

    • The clinician must be able to understand how the software reached its recommendation.

    • Black box models fail this.

If it fails any step, it does not qualify as CDS and is more likely to be treated as SAMD.
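As a reading aid (not a regulatory tool), the four-step screen above can be written as a minimal Python sketch. The function and flag names are invented for illustration; an actual determination depends on FDA's CDS guidance and the full context of the software.

```python
# Minimal sketch of the four-step CDS screen described above.
# Flag names are invented; this simplifies FDA's CDS guidance.

def fails_cds_screen(
    analyzes_images_or_signals: bool,      # step 1: slides, instrument signals
    displays_detailed_patient_data: bool,  # step 2: diagnostic-style display
    dictates_decisions: bool,              # step 3: directives, not recommendations
    independently_reviewable: bool,        # step 4: transparent, reviewable logic
) -> bool:
    """True if the software fails any criterion, i.e., it does not
    qualify as CDS and is more likely to be treated as SAMD."""
    return (
        analyzes_images_or_signals
        or displays_detailed_patient_data
        or dictates_decisions
        or not independently_reviewable
    )

# Example from the talk: an NGS pipeline that processes instrument
# signals fails at step 1 and is therefore a likely SAMD candidate.
print(fails_cds_screen(True, False, False, True))  # -> True
```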


6. Shannon: Practical Examples in the Lab

Not SAMD (generally):

  • PTO tracking software – administrative support.

  • Simple Excel sheets tracking reagent lots – basic data storage only.

    • But if Excel includes complex macros / VB code that transforms data, it may cross into SAMD territory.

  • Flow cytometer software that drives the instrument and performs analysis – software in a medical device; regulated as part of the instrument.

Likely SAMD (higher concern):

  • NGS/bioinformatics pipelines:

    • Often process signals from multiple tests and instruments, interpret results, and may be used across many assays.

    • Explicitly called out by FDA in CDS guidance as often not qualifying as CDS → more likely SAMD.

  • Software that automatically orders antibiotics based on susceptibility results with no human review.

  • Black box risk-scoring algorithms (e.g., colon cancer risk score) with no explainability and no human in the loop.


7. Christine: FDA’s Risk-Based Framework for Software & AI

7.1 Classic Device Classes Applied to Software

FDA uses the same three device classes:

  • Class I (low risk):

    • Often exempt from premarket review but must follow Quality System Regulation (QSR).

    • Example: wellness apps that do not reference disease.

  • Class II (moderate risk):

    • Typically require 510(k).

    • Example: software that monitors heart rate used by clinicians to help make decisions.

  • Class III (high risk):

    • Require PMA; strict pre- and post-market review.

    • Example: AI used for cancer detection or HIV-related decisions.

If the software references high-risk indications (e.g., cancer, HIV), FDA will tend to treat it as higher risk.

7.2 Why Classic Device Rules Don’t Fit AI/ML

Traditional device frameworks assume:

  • You get a clearance/approval.

  • If you change the device in a way that may affect safety/effectiveness, you file a new 510(k) or PMA supplement.

For AI/ML:

  • Continuous learning and updating is essential.

  • Constant re-filing with FDA for every update is impossible.

So FDA has shifted to a Total Product Life Cycle (TPLC) model, heavily supported by guidance documents since 2019.


8. Christine: Total Product Life Cycle – Three Buckets

  1. Predetermined Change Control Plans (PCCPs)

    • Final guidance (Aug 18) for AI-enabled device software functions.

    • When you first seek clearance/approval, you also propose:

      • What might change as the model learns.

      • How you will manage and validate those changes (protocols, performance standards, validation plans).

    • This allows some pre-authorized evolution of the software without new submissions for every minor change (a simplified sketch of such a plan follows this list).

  2. Life Cycle Management

    • Draft guidance (Jan 7; comment period closed in April, final still pending).

    • Focuses on:

      • User interface changes.

      • Risk changes over time as the model learns.

      • How increasing volumes and complexity of data are managed.

  3. Real-World Evidence (RWE)

    • FDA increasingly relies on post-market data to monitor AI tools.

    • Might require Phase 4/post-market studies.

    • In Sept, FDA requested public comment on methods to measure and evaluate AI-enabled devices in the real world (comment period open through Dec 1).
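Purely as an illustration of what a PCCP (bucket 1 above) enumerates, here is a simplified sketch. The field names are invented for this example; FDA's guidance defines the actual expected elements, such as the description of modifications, the modification protocol, and the impact assessment.

```python
# Hypothetical, simplified sketch of PCCP-style content. Field names
# are invented; see FDA's PCCP guidance for the actual elements.

pccp_sketch = {
    "anticipated_modifications": [
        {
            "change": "retrain model on accumulated site data",
            "bounds": "no change to intended use or input data types",
            "validation": "re-run locked test set; sensitivity and "
                          "specificity must meet cleared performance",
        },
        {
            "change": "recalibrate decision threshold",
            "bounds": "threshold stays within a pre-specified range",
            "validation": "calibration and accuracy checks on holdout data",
        },
    ],
    "impact_assessment": "risks and mitigations documented per change type",
}

for mod in pccp_sketch["anticipated_modifications"]:
    print(f"{mod['change']} -> {mod['validation']}")
```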


9. Christine: FDA’s Risk Assessment Dimensions for Software & AI

FDA evaluates software risk using five broad dimensions:

  1. Intended use

    • What does the software actually do?

    • Is it adjunctive or does it directly drive diagnosis/treatment?

    • Is the indication high-risk (e.g., cancer, HIV)?

  2. Risk level

    • Degree of control/influence over clinical decisions.

    • High-risk disease areas are automatically treated more cautiously.

  3. Functionality

    • Is it analyzing patient data?

    • Is it making direct treatment recommendations?

    • Who uses it (lab staff vs physicians vs patients)?

  4. User

    • Use by clinicians in controlled settings vs patients directly → patient-facing raises risk.

  5. Integration

    • Standalone vs deeply embedded in lab/hospital systems.

    • Integration can change safety/effectiveness profile depending on the environment.

9.1 AI/ML-Specific Risks FDA Cares About

FDA has highlighted several unique AI/ML risks:

  • Lack of transparency / explainability (black boxes)

    • Long-standing concern (e.g., multivariate index assays in early 2000s).

    • FDA prefers algorithms where the logic is clear or interpretable.

  • Data drift

    • Real-time learning may degrade performance if data distributions shift (a toy monitoring sketch follows this list).

  • Data bias

    • Training data may under-represent certain populations (age, race, gender, etc.).

  • Cybersecurity vulnerabilities

    • Connected software is inherently more exposed; AI tools handling medical data raise stakes.

  • Integration issues

    • A model safe in one environment may behave differently when integrated into a new system, creating new, unforeseen risks.
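To make the data-drift concern concrete, here is a deliberately simple monitoring sketch, assuming a lab tracks monthly performance against its locked validation baseline. The metric, window, and tolerance are arbitrary choices for illustration, not recommended thresholds.

```python
# Toy drift check: flag months whose sensitivity falls more than a
# tolerance below the locked validation baseline. All numbers are
# arbitrary illustrations.

BASELINE_SENSITIVITY = 0.95  # from the locked validation study
TOLERANCE = 0.03             # allowed drop before investigation

def check_drift(monthly_sensitivity: list[float]) -> list[int]:
    """Return indices of months needing review."""
    return [
        i for i, s in enumerate(monthly_sensitivity)
        if BASELINE_SENSITIVITY - s > TOLERANCE
    ]

# Example: performance erodes as the input population shifts.
flagged = check_drift([0.95, 0.94, 0.93, 0.90, 0.89])
print(f"Months needing review: {flagged}")  # -> [3, 4]
```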


10. Shannon: What Labs Should Do – Practical Risk Mitigation & Documentation

Shannon shifts to “what it means for your lab”, especially if you:

  • Develop software yourself, or

  • Co-develop with a vendor.

Goal: Be prepared so that if FDA ever appears (or if you choose to submit), you’re not scrambling.

10.1 Intended Use: The Central Pillar

Everything flows from the intended use statement:

  • Who uses the software?

  • On what data?

  • For what clinical purpose?

  • With what limitations?

He contrasts a terrible intended use vs a good one.

Bad example:

“The best AI software ever is a revolutionary AI platform that automatically diagnoses all major diseases with 99.9% accuracy using any laboratory or imaging data. It replaces the need for physician interpretation by instantly identifying life-threatening conditions and providing precise treatment pathways for each patient. The world of true, individualized medicine is upon us.”

What’s wrong:

  • Marketing language (“revolutionary,” “world of individualized medicine”).

  • Extremely specific and high accuracy claim (99.9%) – would require enormous proof and creates problems if performance drifts even slightly.

  • Claims to replace physician interpretation → high risk, no human in the loop.

  • “Life-threatening conditions” language invites regulatory scrutiny.

Shannon’s advice:

  • If you must reference performance, use “greater than X%” rather than an overly precise fixed number (e.g., “>95% accuracy”) to allow some margin.

  • Avoid hyperbole, especially about life-threatening conditions unless absolutely justified.

Better example:

“Pretty good AI software is designed to assist lab professionals and clinicians by organizing and displaying patient test results and highlighting values that differ from reference ranges. Software allows users to review underlying data and apply their professional judgment in clinical decision making.”

Key characteristics:

  • Boring, objective, factual.

  • Emphasizes assistive role and human professional judgment.

  • Makes no extreme claims.

10.2 Foundational Documentation

Labs should proactively maintain:

  1. Intended use statement (final, clean, boring).

  2. Device description:

    • How data flows through the system.

    • Interfaces – where data comes from and goes to.

    • What algorithms are used and how they transform data.

10.3 Ongoing Documentation

This is not one-and-done; it’s continuous:

  1. Training data documentation

    • What datasets were used? From which populations?

    • What populations are excluded (e.g., no pediatrics, certain racial/ethnic groups)?

    • How potential biases map to intended use:

      • If trained only on adults, it should not be intended for pediatric use.

  2. Verification and validation

    • Define expected inputs and expected outputs.

    • Run the validation protocol and document actual outputs.

    • Use results to calculate familiar metrics (sensitivity, specificity, PPV, NPV, etc.); a minimal computation sketch follows this list.

  3. Change management

    • Uncontrolled changes = uncontrolled software.

    • Avoid “tinkering” without documentation.

    • Use versioning:

      • Bundle changes into versions and validate them together, rather than many small undocumented tweaks.

    • Tailor validation scope:

      • A trivial UI label change may require minimal testing; algorithm changes affecting outputs require more.

  4. Centralized documentation storage

    • Keep validation reports, training data documentation, change logs, and intended use in one “master file” so you’re not hunting across desks and servers.

  5. SOPs

    • How the software is used, including troubleshooting.

    • Predetermined change control procedures:

      • How you will routinely add a new analyte or gene to a pipeline.

      • How you will extend to a new patient population, including validation steps.
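To make the verification-and-validation bullet concrete, here is a minimal sketch that computes the familiar metrics from a 2x2 comparison of expected versus actual outputs. The counts are invented for illustration.

```python
# Minimal sketch: familiar validation metrics from a 2x2 table of
# expected vs. actual outputs. Counts are invented for illustration.

tp, fp, fn, tn = 48, 2, 3, 147  # hypothetical validation results

sensitivity = tp / (tp + fn)  # fraction of true positives detected
specificity = tn / (tn + fp)  # fraction of true negatives cleared
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"Sensitivity: {sensitivity:.3f}")  # 0.941
print(f"Specificity: {specificity:.3f}")  # 0.987
print(f"PPV: {ppv:.3f}")                  # 0.960
print(f"NPV: {npv:.3f}")                  # 0.980
```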


11. Q&A Highlights

Several questions are discussed near the end:

  1. NGS pipeline vs mass spec or PCR software

    • Mass spec and PCR software often come as part of the instrument → typically software in a medical device; manufacturer bears the burden.

    • NGS bioinformatics pipelines:

      • Often ingest data from multiple instruments/tests, sometimes used for many assays.

      • FDA singles them out in CDS guidance as not typically CDS, so more likely SAMD.

    • There is substantial gray area; labs should assume they may be regulated and prepare accordingly.

  2. Likelihood that FDA will actually enforce on specific software

    • Answer: There’s no precise percentage.

    • FDA reviewers attend conferences, read marketing materials, and may act if they perceive safety/effectiveness concerns or risk-raising claims.

    • For the lab, if you get targeted, your risk is effectively 100%. So build your risk assessment and documentation as if you might be.

  3. Validation and IQ/OQ/PQ concepts for AI

    • Conceptually similar to wet lab:

      • Define what the system should do.

      • Verify it does so.

      • Periodically re-verify as changes occur.

  4. Model calibration and metrics like Brier score

    • Shannon isn’t specifically tied to Brier score but says:

      • Any evidence that demonstrates robustness, repeatability, and accuracy is helpful.

      • If calibration metrics add value in your context, include them in documentation.

  5. AI that generates SOPs and validation documents – SAMD?

    • Likely not SAMD, as it falls under administrative/quality documentation support rather than diagnosis/treatment.

    • As always, details matter, but this would usually fall under the Cures Act exemptions.

  6. AI to catch IV-fluid–contaminated specimens

    • Whether it’s SAMD may depend on:

      • Whether it analyzes raw instrument signals vs post-processed numeric results.

      • How tightly it is integrated into lab decision-making.

    • If it processes signals directly, more likely SAMD; if it just flags unusual result patterns from numeric outputs, there may be more room to argue CDS or lower-risk use—but it depends on the full context.

  7. Future of AI/ML regulation

    • Christine: Expect more guidance and eventually more detailed regulations.

    • Current administration is very focused on AI/ML; FDA has already released many guidance documents and is working continuously to adjust as technology evolves.

    • Regulations must still operate within the existing device framework, but we may see sub-frameworks or clarifying regulations specifically for AI/ML.


12. Final Takeaways

  • LDTs are not devices and cannot be directly regulated as such—but:

    • Components (ASRs, RUOs, collection kits) and software can be.

  • For lab-developed software and AI used with LDTs:

    • The line between “just part of an LDT” and “its own regulated device” is case-by-case and gray.

    • The intended use statement is the critical anchor for risk classification and regulatory exposure.

    • Labs should act as if their software might be regulated and build strong, proactive documentation, validation, and change-control frameworks.

  • FDA is actively evolving its guidance and TPLC approach to cope with AI/ML, focusing on transparency, bias, data drift, cybersecurity, and integration risks.


#

#