Find it here:
https://www.cms.gov/files/document/r13032cp.pdf
What's Senator Whitehouse saying about "payment disparities in Rhode Island?" Surely there are no specific policies designed to be adverse to Rhode Island.
Senator Sheldon Whitehouse (D-RI) raised concerns about CMS reimbursement disparities that he argues systemically underpay Rhode Island providers compared to their counterparts in neighboring states like Massachusetts and Connecticut. While there are no explicit policies designed to target Rhode Island negatively, he pointed out that CMS payment formulas result in reimbursements for Rhode Island hospitals and doctors that are roughly 23-26% lower than those in nearby states.
CMS (Centers for Medicare & Medicaid Services) sets Medicare reimbursement rates based on geographic factors.
Yes—Rhode Island has long had lower Medicare reimbursement rates than neighboring states.
Kennedy agreed with Whitehouse, calling CMS a “tyrannical, insensate bureaucracy” and pledging to work on correcting the disparities. However, Whitehouse was skeptical, stating that nominees often say they’ll work with Congress but fail to deliver once in office.
Whitehouse isn’t claiming CMS deliberately targets Rhode Island, but rather that rigid, outdated payment formulas continue to disadvantage the state, and CMS has been unwilling to correct it. He wants policy adjustments to ensure Rhode Island gets equal treatment in Medicare reimbursements.
Beginning around 2008, Medicare covered the Oncotype DX test, a breast cancer molecular prognostic test used for decisions about cancer management after an initial resection (lumpectomy). I believe (based on a vague memory) there was a legal case or whistleblower case at one point that the company, Genomic Health, was processing orders for the test coming from surgeons rather than the "treating physician," that is, an oncologist. However, I'm not able to find this on Google.
I *am* able to find a recent case where Genomic Health and its parent company Exact Sciences were fined for suggesting that hospitals order tests after 14 days, to avoid bundling under the Medicare 14-Day Rule. However, I'm looking for a case or news article where the misbehavior was considered to be surgeon-based or pathologist-based orders of the Oncotype DX test. Do you know of anything?
I understand you're seeking information about a legal or whistleblower case involving Genomic Health's Oncotype DX test, specifically concerning orders placed by surgeons or pathologists instead of oncologists. While I couldn't locate a case precisely matching that description, there is a related settlement that might be of interest.
In October 2023, Genomic Health, Inc. (GHI), a subsidiary of Exact Sciences Corporation, agreed to pay $32.5 million to resolve allegations of violating the False Claims Act. The allegations centered on GHI's manipulation of Medicare's 14-Day Rule, which dictates billing procedures for certain laboratory tests. The U.S. Department of Justice contended that GHI improperly billed Medicare for Oncotype DX tests ordered within 14 days of a patient's hospital discharge, a period during which such tests should have been included in the hospital's lump-sum payment. Additionally, GHI was accused of encouraging hospitals and physicians to cancel and reorder tests to bypass the 14-day window, thereby enabling direct billing to Medicare.
While this case primarily addresses the circumvention of billing rules related to the timing of test orders, it does not specifically mention issues related to orders being placed by surgeons or pathologists instead of oncologists. If you have more specific details or context, I can assist further in searching for information that closely matches your inquiry.
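As a sidebar to the 14-Day Rule discussion above, the core bundling logic can be sketched as a simple date comparison. This is a hedged illustration only: the function name and the simplified logic are my own, and the real billing rule has many more conditions (specimen collection date, test type, hospital outpatient vs. inpatient status).

```python
from datetime import date, timedelta

# Sketch of the Medicare 14-Day Rule bundling window described above:
# a test ordered within 14 days of hospital discharge is bundled into the
# hospital's lump-sum payment; after the window closes, the lab may bill
# Medicare directly. (Simplified; the actual rule has more conditions.)
BUNDLING_WINDOW = timedelta(days=14)

def bills_separately(discharge_date: date, order_date: date) -> bool:
    """Return True if the lab could bill Medicare directly, i.e. the order
    falls outside the 14-day post-discharge bundling window."""
    return (order_date - discharge_date) > BUNDLING_WINDOW

# An order 10 days after discharge is bundled into the hospital payment...
print(bills_separately(date(2023, 3, 1), date(2023, 3, 11)))   # False
# ...while an order 15 days after discharge may be billed directly.
print(bills_separately(date(2023, 3, 1), date(2023, 3, 16)))   # True
```

The DOJ's allegation, in these terms, was that cancel-and-reorder schemes artificially pushed `order_date` past the window.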
I understand you're seeking information about a legal or whistleblower case involving Genomic Health's Oncotype DX test, specifically concerning orders placed by surgeons or pathologists instead of oncologists. After reviewing available information, I couldn't locate a case precisely matching that description.
The recent settlement involving Genomic Health, Inc. (GHI), where the company agreed to pay $32.5 million to resolve allegations of violating the False Claims Act, primarily addressed issues related to the manipulation of Medicare's 14-Day Rule. The allegations included improper billing practices and encouraging hospitals and physicians to cancel and reorder tests to circumvent billing regulations. However, this case did not specifically mention concerns about tests being ordered by surgeons or pathologists instead of oncologists.
Additionally, a medical malpractice lawsuit was reported where a patient alleged that her physician failed to recommend the Oncotype DX assay, which she claimed could have indicated the need for chemotherapy and potentially prevented cancer recurrence. This case highlights the importance of appropriate test utilization but does not pertain to the specific issue of test ordering by surgeons or pathologists.
Given the lack of publicly available information on a case specifically involving surgeon- or pathologist-ordered Oncotype DX tests, it's possible that such a case either remains confidential or has not been widely reported. If you have more specific details or additional context, I can assist further in searching for information that closely matches your inquiry.
Your blog does a great job of dissecting the Novitas LCD with both sharp analysis and an undercurrent of exasperation that fits the absurdity of the situation. It’s clear that your audience values the pizzazz—the snark is warranted given the bizarre, drawn-out timeline and the almost comical rigidity of Novitas’ stance.
Historical Context & Bureaucratic Absurdity:
The Definition Shenanigans:
The Coverage “Illusion”:
Billing & Coding Gamesmanship:
The Bigger Picture of Medicare LCDs Stagnating Genomic Innovation:
A Section on “How to Read an LCD Like This”
Your blog is both entertaining and brutally insightful—exactly what your readers expect. You’re calling out a Kafkaesque system that, despite its supposed focus on evidence-based medicine, still manages to be intellectually inconsistent, needlessly convoluted, and allergic to technological progress.
Keep the pizzazz—it’s what makes your analysis stand out in a field that too often accepts bureaucracy as inevitable.
MOLDX STATE COUNT
NORIDIAN
JE = 3
JF = AK, WA, OR, ID, MT (5) plus 5 border states = 10
WPS = 4 + 2 = 6
CGS = 2
PALM = 4 + 3 = 7
Totals: 13 (Noridian), 8 (WPS + CGS), 7 (Palmetto); 13 + 8 = 21; 21 + 7 = 28
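For what it's worth, the tally above can be checked with a few lines of Python. Reading the final figures as running subtotals (Noridian, then WPS plus CGS, then Palmetto) is my own interpretation of the notes:

```python
# Tally of the MolDx MAC state counts noted above.
counts = {
    "Noridian JE": 3,
    "Noridian JF": 5 + 5,   # AK WA OR ID MT, plus 5 border states
    "WPS": 4 + 2,
    "CGS": 2,
    "Palmetto": 4 + 3,
}

noridian = counts["Noridian JE"] + counts["Noridian JF"]   # 13
wps_cgs = counts["WPS"] + counts["CGS"]                    # 8
palmetto = counts["Palmetto"]                              # 7
total = noridian + wps_cgs + palmetto                      # 28
print(noridian, wps_cgs, palmetto, noridian + wps_cgs, total)  # 13 8 7 21 28
```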
You can get a report of Final LCDs at the Medicare Coverage Database.
https://www.cms.gov/medicare-coverage-database/search.aspx
Go to function REPORTS.
Although I may have missed some automatic online sorts, I got all the LCDs. I then sorted by title and stripped out all that did not start with MOLDX.
I then sorted by contractor and stripped out all those that did not start with PALMETTO.
This gives 39 MolDx LCDs (in their Palmetto versions).
Many show trivial revisions. I subtracted Last Updated from Effective Date. I assume the many instances of 7 days mean a trivial revision. I focused on instances of >45 days (the notice period for a new or revised LCD).
This gave several LCDs per year that were either "new" or a "nontrivial" revision.
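The filter-and-subtract steps above can be sketched in a few lines of Python. This is a hypothetical reconstruction: the column names ("Title", "Contractor", "Effective Date", "Last Updated") and the date format are assumptions, not the actual Medicare Coverage Database export schema.

```python
from datetime import datetime

NOTICE_PERIOD_DAYS = 45  # notice period for a new or revised LCD

def nontrivial_moldx_lcds(rows):
    """Keep Palmetto MolDX LCDs where Effective Date minus Last Updated
    exceeds the 45-day notice period (i.e., likely new or nontrivially
    revised LCDs, rather than 7-day housekeeping updates)."""
    keep = []
    for row in rows:
        if not row["Title"].upper().startswith("MOLDX"):
            continue
        if not row["Contractor"].upper().startswith("PALMETTO"):
            continue
        effective = datetime.strptime(row["Effective Date"], "%m/%d/%Y")
        updated = datetime.strptime(row["Last Updated"], "%m/%d/%Y")
        if (effective - updated).days > NOTICE_PERIOD_DAYS:
            keep.append(row)
    return keep

# Hypothetical rows: one substantive revision, one 7-day housekeeping tweak.
rows = [
    {"Title": "MolDX: Example Assay A", "Contractor": "Palmetto GBA",
     "Effective Date": "06/01/2024", "Last Updated": "03/01/2024"},
    {"Title": "MolDX: Example Assay B", "Contractor": "Palmetto GBA",
     "Effective Date": "06/01/2024", "Last Updated": "05/25/2024"},
]
print(len(nontrivial_moldx_lcds(rows)))  # 1
```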
Novitas LCD L39367
LCD:
January 27, 2025 blog to which this page is a sidebar:
https://www.discoveriesinhealthpolicy.com/2025/01/genomeweb-scoop-novitas-delays-cancer.html
SIDEBAR: PANCRAGEN IN JANUARY 2025 NOVITAS LCD L39367
PancraGEN- Interpace Diagnostics - DATA REVIEW
In addition to the articles submitted with comments, PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature that provided information regarding the analytic and clinical validity and clinical utility for the PancraGEN test. Key words used to search in combination included: PancraGEN, PathfinderTG, molecular testing, topographic genotyping, pancreatic cyst(s), pancreatic cyst fluid, solid pancreatic lesions, and KRAS and/or GNAS mutations.
Thirty-five total publications addressing the analytical validity, clinical validity, and/or clinical utility of the PancraGEN prognostic test were identified. The papers identified focused on both individuals with pancreatic cysts and with solid pancreaticobiliary lesions.
In 2006, a patent was filed for a topographic genotyping molecular analysis test, which would later become PathfinderTG and then PancraGEN. The test was designed to classify the risk of pancreatic cysts and solid pancreaticobiliary lesions when first line evaluation results were inconclusive.54 According to the patent, the test directly measured several aspects of a specimen: DNA quality, loss of heterozygosity (LOH) in tumor suppressor genes, mutations in oncogenes (only K-ras oncogene specifically named), other less well defined genetic targets (e.g., “structural alterations in DNA”), percentage of mutated DNA per identified DNA abnormality, and “specific temporal sequence of mutation accumulation” as determined from the aforementioned percentages of mutated DNA. Altogether, these measurements would be used for “diagnosing and/or determining the prognosis of a pancreatic anomaly in a patient suffering from pancreatic cysts” and used for “determining a course of treatment for a pancreatic anomaly.”
Since the original patent, significant changes have been made to the original test’s data input and the presentation of test results. In 2012, the test results were changed from 3 categorical results (benign, statistically indolent, and aggressive) to 4 results (benign, statistically indolent, statistically higher-risk, and aggressive).55 Since result categories are tied to specific prognostic outcomes and advise different next steps in clinical care, changing the number and type of categories changes the test output, thereby creating a new test. The most recent version of PancraGEN added analysis of the GNAS oncogene. Since mutations in GNAS would be considered a “significant molecular alteration,” testing of GNAS would potentially reclassify any specimens that had been classified based on KRAS alone. Considering the result categories and genes analyzed were both changed, the latest version of PancraGEN is a new, distinct test. The literature for older versions of PathfinderTG are not comparable to the current version of PancraGEN.
Of the 35 publications identified, 26 publications described an earlier version of the PancraGEN test that utilized less molecular data and provided fewer categorical results than the currently offered PancraGEN test.56-81 The Agency for Healthcare Research and Quality (AHRQ) performed a technical review of an earlier version of PancraGEN.80 The 2015 AGA Guidelines for diagnosis and management of pancreatic cysts do not name PancraGEN directly in their section about molecular testing, but instead only cite 2 papers discussing older versions of the PancraGEN/Pathfinder test.73
Five of the 35 publications identified did not analyze the PancraGEN test’s primary output (categorical results) but instead evaluated specific components of the test (i.e., molecular test data).82-86 In fact, the study by Shen and colleagues stated that it was not meant “to evaluate the scientific methods or validity of this commercially available test.”85
Of the 35 publications identified, there were only 4 papers that evaluated the current version of the PancraGEN including the 4 categorical results. One retrospective study addressed PancraGEN’s clinical validity and clinical utility for pancreatic cysts.55 Two retrospective studies utilizing data from Al-Haddad and colleagues’ study addressed the clinical utility of PancraGEN for pancreatic cysts.87,88 One paper addressed PancraGEN’s clinical validity and clinical utility for solid pancreaticobiliary lesions.89
###
PancraGEN- Interpace Diagnostics - CONCLUSIONS
PancraGEN (also known as Pathfinder TG and Integrated molecular pathology [IMP]) has received multiple updates to its input data and algorithmic categorization of risk since its initial release. Comparison of early example reports to the most recent example report (available on the PancraGEN website) clearly demonstrates this evolution.136,137 The current version of the PancraGEN report relies on algorithmic assessment of molecular data, cyst fluid test results, and radiologic findings to determine a patient’s risk for developing high grade dysplasia (HGD) and/or carcinoma. The algorithm is clearly described and diagrammed in the sample report.137 The algorithmic stratification of risk is heavily weighted towards molecular data, with the absence of “significant molecular alterations” automatically resulting in “Benign” categorization and the presence of 2 or more “significant molecular alterations” automatically resulting in “Aggressive” categorization. The “Benign” category confers a “97% probability of benign disease over the next 3 years” and the “Aggressive” category confers a “91% probability of HGD/carcinoma”. Per the sample report, 5 “significant molecular alterations” are described137:
The current body of literature does not support the clinical validity of the PancraGEN test. Since the algorithm primarily categorizes risk via “significant molecular alterations” and the “Benign” category is defined as absence of these alterations, not testing any of the above 5 alterations would result in an underestimation of patients with potential higher risk of HGD/carcinoma. Therefore, adequate assessment of clinical validity of the current version of PancraGEN would require assessment of all 5 alterations in study populations. None of the 4 studies assessing the current version of PancraGEN fully assess all 5 alterations.55,87-89 For example, in the 3 retrospective studies derived from the National Pancreatic Cyst Registry, a significant number of patients (“468/492 IMP diagnoses”) were NOT tested for GNAS because their data was collected from earlier versions of the PancraGEN test that did not include GNAS testing.55,87,88 Another study, from Khosravi and colleagues, addresses the 4 categorical results (simplifying them into 2 categories for the paper: low and high risk) but does not discuss GNAS or clonality of identified mutations. As a result, PancraGEN studies lack the statistical integrity required to establish the clinical validity of the current version of PancraGEN.
Additionally, there are no studies supporting the clinical utility of the PancraGEN test.
First, there are no prospective studies for PancraGEN that directly evaluate its effect on patient management and outcome. The 4 studies evaluating the current version of PancraGEN are all retrospective, assessing patient populations who received PancraGEN testing as part of their clinical care; however, the assessment of PancraGEN’s effect on patient management and outcome is extrapolated from reading patient charts after the fact.
Second, cysts with a potential to develop into pancreatic cancer, like an Intraductal Papillary Mucinous Neoplasm (IPMN), can take over a decade to become malignant.138-140 Thus, when PancraGEN categorizes a specimen as “Benign” or “Statistically Indolent” with a “97% probability of benign disease over the next 3 years”, the results may provide patients with a false sense of security and/or delay instituting a longer-term follow-up plan, potentially resulting in patient harm. Moreover, none of the studies available for PancraGEN follow up their entire patient populations for over 10 years. In fact, the “97% probability of benign disease over the next 3 years” is based on a 492-patient, 2015 study from Al-Haddad and colleagues where patients were followed up from 23 months to 7 years and 8 months. Notably, 54% of the patients were followed up less than 3 years.
Third, current society and expert guidelines do not endorse or mention the current version of PancraGEN as necessary in the work-up of pancreatic cysts73,80,140,141 which further demonstrates the lack of evidence for PancraGEN’s clinical utility.
In summary, the body of literature for PancraGEN is insufficient to establish both clinical validity and clinical utility.
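Stepping back from the quoted LCD text: the risk-stratification rule it describes is effectively a small decision algorithm, sketched here for illustration. The handling of exactly one alteration is an assumption on my part, since the excerpt specifies only the zero-alteration and two-or-more branches.

```python
# Hedged sketch of the PancraGEN stratification rule as described in the
# LCD excerpt: zero "significant molecular alterations" maps to "Benign",
# two or more map to "Aggressive". The single-alteration branch is NOT
# specified in the excerpt and is assumed here.
def categorize(num_significant_alterations: int) -> str:
    if num_significant_alterations == 0:
        return "Benign"      # per sample report: 97% probability benign over 3 years
    if num_significant_alterations >= 2:
        return "Aggressive"  # per sample report: 91% probability HGD/carcinoma
    # Assumed: one alteration falls into an intermediate category.
    return "Statistically Indolent or Statistically Higher-Risk"

print(categorize(0))  # Benign
print(categorize(3))  # Aggressive
```

The LCD's validity argument follows directly from this structure: if a study population was never tested for one of the five alterations (e.g., GNAS), some patients' alteration counts are undercounted, biasing results toward "Benign."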
The health plan waivers under Sections 1115 and 1115A of the Social Security Act (SSA) and Section 1332 of the Affordable Care Act (ACA) serve distinct purposes and apply to different areas of healthcare policy. Here’s a breakdown:
SECTION 1115 WAIVERS (MEDICAID)

Purpose:
Section 1115 waivers allow states to test innovative approaches to delivering Medicaid services while waiving certain federal requirements. These waivers are intended to demonstrate or pilot programs that improve care, increase efficiency, and/or reduce costs.
Authority:
Established under Title XI of the SSA and predates the ACA.
Key Features:
Examples:
SECTION 1115A WAIVERS (CMMI MODELS)

Purpose:
Section 1115A, created under the Affordable Care Act (ACA) in 2010, authorizes the Center for Medicare and Medicaid Innovation (CMMI) to test innovative payment and delivery models in Medicare, Medicaid, and CHIP to improve quality and reduce costs.
Authority:
Part of the ACA; applies to both Medicare and Medicaid programs.
Key Features:
Examples:
SECTION 1332 WAIVERS (ACA STATE INNOVATION)

Purpose:
Section 1332 waivers allow states to waive specific requirements of the Affordable Care Act (ACA) to pursue innovative strategies for providing affordable health insurance while maintaining coverage standards.
Authority:
Part of the ACA (Section 1332), these waivers target the private insurance market (individual and small-group plans).
Key Features:
Examples:
Feature | Section 1115 (Medicaid) | Section 1115A (Medicare & Medicaid) | Section 1332 (ACA) |
---|---|---|---|
Focus | Medicaid and CHIP only | Medicare, Medicaid, and CHIP | Private insurance (ACA markets) |
Authority | Social Security Act (historic) | ACA (2010) | ACA (2010) |
Purpose | Test Medicaid flexibility | Test innovative payment/delivery models | Innovate ACA insurance structure |
Requirements Waived | Medicaid-specific rules | Medicare/Medicaid delivery/payment rules | ACA insurance requirements |
Examples | Medicaid managed care, eligibility expansions | ACOs, Bundled Payments, CPC+ | Reinsurance programs, public options |
Budget Neutrality | Required | Not explicitly required | Required |
###
Here is an example of a 1332 state exchange plan waiver for Nevada, released by the Biden administration on 1/10/2025.
Overview: Nevada's Section 1332 State Innovation Waiver under the ACA, approved for 2026–2030, aims to improve affordability and coverage in the individual health insurance market by implementing:
Key Highlights:
Programs Included:
Implementation & Monitoring:
Next Steps: The state must accept the terms by February 8, 2025, and comply with reporting and oversight requirements to ensure the waiver meets its goals.
Let me know if you'd like details on any specific program or provision!
January 6, 2025: Dark Report reviews 2024 and cites a July 2024 op-ed in STAT about the DOJ's retrograde attitude toward pharma-sponsored genetic testing for drug eligibility. I asked Chat GPT 4 whether a 2022 AO and a 2024 AO were contradictory. Both find, I guess, that genetic-testing support was potentially illegal but, in those AO scenarios, allowable (low risk).
In between the 2022 AO and 2024 AO was a legal case where Ultragenyx was forced to pay $6M. Ouch. Here.
###
The two OIG opinions (OIG 22-06 and OIG 24-12) are not inherently contradictory but reflect distinct interpretations based on the specific circumstances of each genetic testing arrangement. Here's an analysis:
OIG 22-06 (2022):
OIG 24-12 (2024):
While OIG 22-06 involves genetic testing with less direct ties to prescribing decisions, OIG 24-12 involves an arrangement where testing outcomes could more explicitly inform treatment with the manufacturer’s drug. These distinctions in purpose and risk management likely explain why the OIG reached similar outcomes despite slightly different risk profiles.
####
OK, so I fed Chat GPT the 2023 DOJ report on the $6M fine against Ultragenyx, and I got this.
https://www.statnews.com/2024/07/25/sponsored-genetic-testing-rare-diseases-doj-impediment/
###
To fit the 2023 Ultragenyx legal case into the paradigm of the 2022 and 2024 OIG advisory opinions, we can compare and contrast the legal and factual underpinnings of all three situations. This analysis demonstrates why the Ultragenyx arrangement fell afoul of the law, while the other two arrangements received favorable OIG opinions.
2022 AO and 2024 AO: Both advisory opinions emphasized strong safeguards against using the testing program as a tool to drive prescriptions or referrals. Specifically:
Ultragenyx Case: In contrast, Ultragenyx:
2022 AO: The genetic test results had no direct bearing on whether a drug was prescribed, as the medications were approved for all forms of the disease. This lack of direct connection mitigated concerns about improper inducements.
2024 AO: While the genetic test could identify patients eligible for a specific drug, the advisory opinion emphasized safeguards ensuring independence of prescribing decisions:
Ultragenyx Case: Here, the genetic testing program was explicitly tied to inducing prescriptions:
2022 AO and 2024 AO: Both advisory opinions acknowledged that the free testing programs constituted "remuneration" under the Federal Anti-Kickback Statute but concluded that the safeguards reduced fraud and abuse risks:
Ultragenyx Case: Ultragenyx’s actions were deemed kickbacks because:
2022 AO and 2024 AO: Both opinions strictly limited data usage:
Ultragenyx Case: Ultragenyx explicitly used testing data to target HCPs and promote Crysvita, undermining the independence of medical decision-making.
The 2023 Ultragenyx settlement demonstrates what happens when safeguards, like those emphasized in the 2022 and 2024 AOs, are absent or disregarded. The lack of firewalls between genetic testing programs and marketing activities, coupled with direct links between testing and prescriptions, transformed Ultragenyx’s program into a kickback scheme.
In contrast to the arrangements approved in the advisory opinions, Ultragenyx blurred the line between diagnostics and marketing, resulting in False Claims Act liability. This case reinforces the importance of maintaining clear ethical and legal boundaries in such arrangements.
###
I fed Chat GPT the 2024 op-ed in STAT. Dr. Kakkis is CEO of Ultragenyx.
####
Dr. Emil Kakkis’s op-ed is a passionate defense of the role of sponsored genetic testing programs in the rare disease landscape. It reflects frustration with the DOJ’s enforcement action against Ultragenyx, but it also downplays key factors that led to the $6 million settlement. Here's an analysis of his arguments and their fairness in light of the DOJ case and the two OIG advisory opinions:
The Importance of Genetic Testing for Rare Diseases:
Failure of Insurance to Cover Testing:
Clinical Autonomy in Prescribing Decisions:
Broader Benefits of Genetic Testing:
Minimizing the Role of Kickbacks:
Ignoring the Lack of Safeguards:
Overlooking the DOJ’s Focus on the Lab Relationship:
Framing the DOJ’s Actions as Medically Harmful:
The Fine Line Between Low-Risk and High-Risk Arrangements:
The DOJ’s Emphasis on Intent and Context:
The Role of Corporate Responsibility:
The op-ed makes valid points about the challenges faced by rare disease patients and the potential benefits of genetic testing. However, it is not entirely fair-handed because it:
Dr. Kakkis’s frustration is understandable, but his op-ed would have been more balanced if it had acknowledged the lessons from the Ultragenyx case and the need for more robust safeguards in sponsored testing programs. By aligning with the principles outlined in the OIG advisory opinions, sponsored testing can advance rare disease diagnosis without raising red flags for regulators.
Sometime in 2024, I think in mid-2024, I saw an essay by a physician in one of the major journals (I think JAMA, NEJM, Ann Intern Med, or similar).
The essay was about what to say in difficult situations. It might have been triggered by a study that found that AI dialog in difficult situations (e.g., a terminal diagnosis) was as empathic as spontaneous human remarks and statements.
As I recall, the author made the point that he had learned in his training that, in the most difficult situations, there was actually a fixed library of statements that were generally "better" to use than simply making comments off the top of his head and hoping for the best via spontaneity.
I resonated with the physician's essay, but have not been able to find the citation again.
###
I have not reviewed the material below; I fed the above paragraphs straight into Chat GPT 4, clicked on the "web" search button, and got this result.
####
Chat GPT 4 (Web search)
You might be referring to the study "Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum," published in JAMA Internal Medicine in April 2023.
This research found that chatbot-generated responses were preferred over physician responses and rated significantly higher for both quality and empathy. The findings suggest that AI assistants could aid in drafting responses to patient questions, potentially alleviating some of the communication burdens faced by physicians.
Following this study, several commentaries and responses were published discussing the implications of AI-generated empathy in medical communication. For instance, the article "Machine-Made Empathy? Why Medicine Still Needs Humans" and its subsequent reply delve into the nuances of AI-generated empathetic language and its role in healthcare. These discussions emphasize the importance of maintaining human oversight and the irreplaceable value of genuine human empathy in patient care.
Additionally, the viewpoint "Compassionate Machines: The Ethics of 'Artificial Empathy' in Cancer Care," published in JAMA Oncology in May 2024, explores the ethical considerations of integrating AI-generated empathy into oncology practice.
This piece highlights the potential benefits and challenges of using AI to simulate compassionate communication in sensitive medical contexts.
These publications collectively address the evolving role of AI in medical communication, particularly concerning the use of standardized empathetic responses in difficult situations.
###
None of the above were exactly what I was looking for. Adjacent to the above, JAMA suggested "physicians and AI chatbot responses..." (Chen et al., JAMA Oncol 2024).
https://jamanetwork.com/journals/jamaoncology/fullarticle/2818765
###
Notes - resources found; none is the first-person essay arguing that, whether AI-generated or memorized, there are certain time-tested statements to say in a difficult situation.
###
Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum.
Ayers et al., JAMA Intern Med 2023;183:589.
Machine-made empathy? Why medicine still needs humans.
Cadiente et al., JAMA Intern Med 2023;183:1278.
Machine-made empathy. Reply.
Ayers et al., JAMA Intern Med 2023;183:1279.
Compassionate machines: the ethics of "artificial empathy" in cancer care.
Koranteng et al., JAMA Oncol 2024;10:847.
Physician and artificial intelligence chatbot responses to cancer questions from social media.
Chen et al., JAMA Oncol 2024;10:956.
In an essay in Annals of Internal Medicine, dated 1/21/2025, two physicians could barely, after weeks of effort, obtain a key drug for their very sick child. The denials were repetitive and turned on the exact verbiage of the FDA label.
https://www.acpjournals.org/doi/10.7326/ANNALS-24-02781
One connection I made: the analogous case in which a specialty drug's labeling requires an "XYZ" diagnostic test, but an innovator lab has an LDT that provides superior performance, or a new generation of technology, yet is not on-label.