Tuesday, April 14, 2026

Appendix S: Comparing March 2026 with April 14, 2026

Bottom line: Option B looks less like a new theory of Appendix S and more like a repositioning exercise. The April 14 version is trying to make Appendix S function as a practical CPT drafting framework for code descriptors, centered on software outputs, their role in clinical care, and what evidence is needed to justify using Appendix S terminology. It also seems to be trying to protect a broader “augmentative” lane so that not every highly automated algorithm gets pushed into “autonomous.”

First, the document’s purpose shifts. Option A is framed as a taxonomy for classifying AI applications in medical services and procedures. Option B opens by saying the taxonomy provides guidance in determining the appropriate terminology for CPT code descriptors, and only then speaks of classification. That is a meaningful repositioning. They are trying to make Appendix S sound less like a conceptual essay about AI and more like a coding governance tool that CPT can actually use when writing or vetting descriptors. Option B also adds that taxonomic classification is determined by the software’s output and the role of its output in clinical care, which is a more operational coding lens.

Second, Option B is trying to anchor the taxonomy in the software output, not in the brand-name aura of “AI.” Both versions say “AI” itself is not enough, but B leans harder into the notion that what matters is the output, what that output does clinically, and whether the output is clinically meaningful. That is important because it gives CPT a cleaner way to evaluate software-based services: not “is this AI?” but “what is the output, what role does it play, and what evidence supports that role?”

Third, B noticeably strengthens the case for “assistive” tools. Option A says assistive outputs improve physician performance and may be supported by technical or analytical validation if the primary service output is unchanged. Option B keeps that, but adds more explicit language about improving accuracy, precision, and inter-observer variability, while recognizing that the primary service output may be unchanged. That reads like an effort to reassure CPT that assistive software can still be valuable and code-descriptively recognizable even when it does not create a new clinical endpoint or a new physician-level conclusion. In other words, they are trying to preserve space for software that helps the doctor see or perform better, without forcing it to prove full downstream clinical utility every time.

Fourth, the biggest substantive move is in “augmentative.” Option A describes augmentative outputs as clinical scales, indexes, categorical classifications, or other metrics in common clinical use, or alternatively outputs validated for impact on patient care. Option B expands and clarifies this. It explicitly includes risk scores, says outputs may be used in diagnosis, cure, mitigation, treatment, or prevention, and creates two clearer validation paths: either the output is validated by equivalence to a metric already in clinical use, or, if it is novel, it may be validated by impact on patient care. That is a major tell. They are trying to make augmentative broad enough to include modern predictive/prognostic software outputs without requiring every such output to look like an old-fashioned common-use metric.

Related to that, B is trying to tie augmentative outputs more tightly to actual care decisions. Option A says augmentative output must be clinically validated, important, and pertinent to the code. Option B adds that the output contributes to patient management. That phrase matters. It suggests they are trying to move the taxonomy away from abstract statistical novelty and toward a more CPT-friendly question: does this output matter enough to patient care to justify descriptor language and possibly code recognition? That is more practical, and probably more defensible at CPT.

Fifth, Option B appears to be protecting automated-but-not-fully-autonomous software from being shoved into the autonomous bucket. This may be the most strategic sentence added in B: the note explaining that “automated” or “automatically” refers to the algorithm’s work in getting from input to output, and that automatic derivation of parameters can apply to either augmentative or autonomous. That is a very important clarification. It suggests they now want CPT to distinguish between software that is automated in computation and software that is autonomous in clinical conclusion or action. That protects many software tools that run automatically but still require physician interpretation from being treated as autonomous.

Sixth, B refines the autonomous section mainly by clarification, not by changing the theory. The three levels remain intact, but B adds explanatory language around “automatically,” adds “explainability,” slightly sharpens the Level II override language, and adds a sentence in Level III about continuing physician oversight over multiple interventions to assess whether management goals are being achieved. That feels like a response to the discomfort that fully automatic clinical action can provoke. They are not abandoning autonomy; they are trying to make it look more governable, reviewable, and safer within CPT’s framework.

Seventh, B seems to be trying to show CPT that this is not hypothetical. The April version adds an “Examples in CPT code set” row to the summary table and explicitly labels retinal imaging 92229 as Level I autonomous. That is a sign they are trying to root the framework in existing CPT precedent, not merely future possibilities. It says, in effect, “this taxonomy already maps onto real codes.” That is often what people do when trying to get editorial-panel buy-in.

So what are they trying to do now? I think the answer is: they are trying to make Appendix S more acceptable to CPT by turning it into a descriptor-writing and classification tool, broadening and protecting the augmentative category, preserving room for assistive tools with primarily analytical evidence, and preventing the word automatic from collapsing too many products into autonomous. Put bluntly, Option B is trying to make Appendix S more usable, more flexible, and less likely to over-classify software outputs into the most controversial bucket.

One caveat: Option B is textually messy in several places, with duplicated phrases and obvious edit collisions. So not every wording change should be over-read as intentional policy. But the directional changes are consistent enough that the overall strategy is pretty clear.

If useful, I can next turn this into a one-page “A to B delta memo” in your style, with headings like Purpose shift / Assistive shift / Augmentative shift / Autonomous clarification / Strategic implication for April 30.


###

While I am only now studying Appendix S, I have four concerns.

First, the version is now heavily, heavily edited, with layers of edits on top of each other, and is nearly unreadable. They should, at a minimum, provide a clean version of Appendix S, Ballot B, April 14, 2026. //

Second, since the document is so vastly edited, it may be better to simply produce a new document that is fit for purpose. For example, the US Articles of Confederation were dumped and we started over with the US Constitution; we did not simply redline 98% of the Articles. //

Third, I've been unclear whether the three-category system, all beginning with "A," is really sound. Assistive, Augmentative, Autonomous. This is not a natural word triplet (like big, bigger, biggest, or like high school, college, grad school). I believe there were preconceived intentions: Assistive, Augmentative, and Autonomous were brought in, but then the attempts to define each group merged those intentions (which existed before the word choices) with the meanings of the words chosen (and the triple of A words is more than a little suspicious). For example, perhaps they should be Group 1, Group 2, Group 3, which carries no legacy semantic baggage, rather than three A terms that each risk acquiring varying definitions. ///

Fourth, beta testing should be crucial. Software is extensively tested with many inputs trying to break it before release. There are some "examples," but between the massive redlining and the density of the document, it is hard to tell whether we are really putting careful, focused attention into beta testing many examples. For example, a real beta test would be to take 10 or 20 services, see how three experts categorize them, and get things like kappa scores while elucidating paradoxes or confusions. /// How do my four concerns fit with what's going on here in the confusing multiple versions of Appendix S?


###

Your four concerns fit extremely well with what is going on here. I would say they are not peripheral complaints; they are almost a diagnostic readout of the current state of Appendix S. What you are seeing is a document that is trying to do too many things at once: define a taxonomy, guide CPT descriptor drafting, set evidence expectations, preserve room for current products, and avoid boxing too many products into the most controversial category, autonomous. When a document is under that much conceptual strain, the symptoms are exactly the ones you identify: unreadable editing, drift in purpose, unstable terminology, and weak real-world testing.

On your first concern, the heavy editing is not just ugly; it actively blocks comprehension. In Option B, there are obvious layered edits, duplicated phrases, collisions of alternative wording, and sentences that still show remnants of internal negotiation. That matters because some of the most important changes are subtle: B newly says Appendix S helps determine terminology for CPT code descriptors, newly emphasizes the software output and its role in clinical care, broadens augmentative outputs to include risk scores and metrics used in diagnosis or treatment, and adds the clarifying note that “automatic” derivation can apply to either augmentative or autonomous. Those are substantive policy moves, but the editing clutter makes it hard to tell which changes are deliberate and which are debris. So your request for a clean copy of Ballot B is not a matter of aesthetics; it is a precondition for meaningful review.

Your second concern goes even deeper: at some point a redline becomes the wrong instrument. I think you are quite right that this may now call for a fresh document fit for purpose rather than endless accretive editing. Option B is no longer merely polishing Option A. It is shifting the apparent mission of Appendix S from a general taxonomy to a practical framework for descriptor drafting and category assignment; it is also reshaping the validation logic for augmentative outputs and clarifying the automation/autonomy boundary. Once the architecture is moving, not just the wording, a redline can become misleading because it suggests continuity where there may actually be redesign. Your Articles of Confederation versus Constitution analogy is apt: once the underlying theory of the document is evolving, clarity may require a rewrite rather than a 98% markup.

Your third concern about the “three A’s” may be the sharpest conceptual point of the lot. I agree that assistive, augmentative, autonomous does not feel like a naturally ordered triplet. It feels more like a crafted branding set than an organically stable taxonomy. And the text itself shows signs of the instability you are sensing. For example, in B, assistive still includes outputs that may use terms like “likelihood of,” “suggestive of,” or “risk for,” which already sounds more analytically rich than a plain attention-directing tool. Meanwhile, augmentative is expanded to include risk scores and novel metrics that contribute to patient management, and B then adds the crucial note that automatic derivation can apply to either augmentative or autonomous. That is revealing. It suggests the real conceptual battle is not over three elegant words; it is over multiple axes that the three words imperfectly compress: detection vs parameter generation vs conclusion, physician interpretation vs machine action, automation vs autonomy, and analytical validity vs clinical utility. In that sense, your suspicion is justified: the words may have come first, but the actual categories are still being negotiated underneath them. A neutral framework like Group 1, 2, 3 or a matrix with named axes might indeed be intellectually cleaner.

Your fourth concern about beta testing is also exactly on target. The document does contain examples, and Option B strengthens that by explicitly listing examples in the CPT code set, such as algorithmic ECG risk assessment, FFR, and retinal imaging. But that is not the same thing as true stress-testing. A real taxonomy test would do what you suggest: take 10 or 20 borderline services, have multiple knowledgeable reviewers classify them independently, and then examine where they disagree, why they disagree, and whether the category system is producing reproducible results. Nothing in these drafts shows that kind of disciplined field testing. The addition of examples feels more like illustration for persuasion than validation of the taxonomy’s reliability. So your instinct about kappa scores, paradox cases, and deliberate edge-case review is exactly the kind of maturity this framework would need before being treated as settled doctrine.
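To make that concrete, here is a minimal sketch of the agreement arithmetic, assuming three hypothetical reviewers who each independently assign one of the three Appendix S categories to a small set of services. The service ratings below are invented placeholders, not real classifications; only the Fleiss' kappa formula itself, the standard chance-corrected agreement statistic for multiple raters, is taken as given.

from collections import Counter

CATEGORIES = ["assistive", "augmentative", "autonomous"]

def fleiss_kappa(ratings):
    """ratings: one list per service, each holding one category label per rater."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    counts = [Counter(item) for item in ratings]  # n_ij: raters putting service i in category j

    # Observed agreement: mean over services of
    # P_i = (sum_j n_ij^2 - n) / (n * (n - 1))
    p_bar = sum(
        (sum(c * c for c in item.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for item in counts
    ) / n_items

    # Chance agreement: P_e = sum_j p_j^2, where p_j is the share of all
    # ratings that fell into category j
    total = n_items * n_raters
    p_e = sum(
        (sum(item[cat] for item in counts) / total) ** 2
        for cat in CATEGORIES
    )
    return (p_bar - p_e) / (1 - p_e)

# Invented placeholder ratings: 5 services x 3 reviewers.
ratings = [
    ["assistive", "assistive", "assistive"],
    ["augmentative", "augmentative", "autonomous"],  # boundary dispute
    ["autonomous", "autonomous", "autonomous"],
    ["assistive", "augmentative", "augmentative"],   # boundary dispute
    ["augmentative", "augmentative", "augmentative"],
]
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.2f}")

On invented ratings like these, where two of five services draw a dissenting vote, kappa comes out around 0.58, conventionally read as only moderate agreement. A real beta test would also want the committee to decide in advance what level of agreement the taxonomy must clear before it is treated as reliable.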

So how do your concerns fit the multi-version confusion? They fit because the drafts appear to be the product of a live policy negotiation among at least several competing aims. One aim is to make Appendix S more usable for CPT descriptor writing. Another is to keep a broad and defensible lane for augmentative software so modern predictive and risk outputs are not forced into autonomous. Another is to preserve room for assistive tools whose value may be analytical rather than outcome-transforming. And another is to define autonomous in a way that sounds governable, especially by distinguishing “automatic computation” from “autonomous clinical action.” Those are real policy goals, but they are being pursued inside a taxonomy whose wording may not be robust enough to carry them cleanly. The result is exactly what you are reacting to: a text that reads less like finished doctrine and more like a negotiation artifact under strain.

My own synthesis would be this: Concern 1 says the current draft is not readable enough to judge fairly. Concern 2 says the current draft may be too conceptually changed to justify continued patching. Concern 3 says the taxonomy itself may be overbranded and under-structured. Concern 4 says the framework has not yet been shown to be reliable when applied to hard cases. That is a very coherent critique. It is not four separate gripes. It is one integrated argument that Appendix S is still in a pre-final, pre-validated, partially unstable stage, even as it is being advanced toward the April 30, 2026 meeting.

A very fair one-sentence verdict would be: the confusing multi-version history of Appendix S is not merely a drafting problem; it is evidence that both the wording and the underlying classification model are still being worked out at the same time.

I can turn this into a crisp meeting memo or comment letter paragraph set in your voice.

###

Meeting Memo

Re: Appendix S for April 30, 2026 CPT meeting

My reading is that the current Appendix S process has reached the point where the drafting problems are no longer merely editorial. They now reflect unresolved conceptual issues. Option B does not simply tidy Option A. It shifts the apparent purpose of the appendix toward a tool for choosing CPT descriptor terminology, places greater emphasis on software outputs and their role in clinical care, broadens the augmentative lane to include items such as risk scores and outputs used in diagnosis or treatment, and expressly states that automatic derivation can apply to either augmentative or autonomous. In short, the drafters are still refining the architecture while also trying to finalize the wording.

1. Readability is now a substantive problem, not a cosmetic one. The April 14 ballot text is so layered with edits, collisions, and duplicated phrases that it is difficult to tell which changes are intentional policy choices and which are simply drafting residue. That is especially problematic because the new text appears to make meaningful moves: it newly frames Appendix S as guidance for CPT code descriptors; it adds language about the software output’s role in clinical care; and it materially revises the discussion of assistive and augmentative outputs. At a minimum, the committee should insist on a clean April 14 version before treating Ballot B as ready for serious review.

2. A full rewrite may now be more honest than further redlining. This no longer reads like a stable text undergoing polish. It reads like a document whose mission is still evolving. Option A was framed more simply as a taxonomy for classifying AI applications. Option B is more explicitly a framework for CPT terminology, classification, and evidentiary positioning. Once the theory of the document is moving, not just the wording, endless redlining becomes misleading. It may be cleaner and more transparent to produce a fresh draft that is fit for purpose rather than continuing to patch over a document that is already difficult to read and harder still to interpret.

3. The three “A” terms may be carrying too much semantic baggage. Assistive, augmentative, and autonomous are memorable, but they do not form a naturally self-explanatory progression. More importantly, the April draft suggests that the real conceptual work is occurring on several different axes at once: detection versus parameter generation versus conclusion; physician interpretation versus machine action; and automation versus autonomy. The new statement that “automatic” derivation can apply to either augmentative or autonomous is especially revealing, because it shows that the taxonomy is trying to separate at least two different concepts that the word choices do not naturally keep apart. If the committee wants the framework to endure, it may need either more neutral labels or a more explicit matrix structure rather than relying on a somewhat forced A-A-A triplet.

4. The framework appears illustrated, but not truly beta-tested. Option B adds examples from the CPT code set and therefore looks more concrete than Option A. But examples are not the same as validation of the taxonomy itself. A serious beta test would take a meaningful sample of services, assign them independently to reviewers, measure agreement, and then analyze the paradoxes and borderline cases. That kind of reliability testing would be particularly important here because the draft is trying to distinguish subtle boundaries such as assistive versus augmentative and augmentative versus autonomous. At present, the examples seem more like persuasive illustrations than evidence that the system has been stress-tested.

Overall conclusion. Your four concerns line up closely with what the document itself shows. The current multi-version confusion is not just a drafting nuisance. It is evidence that the committee is still working out both the text and the taxonomy at the same time. The practical recommendation for the April 30 meeting is straightforward: request a clean Ballot B text, consider whether a rewrite is preferable to further redlining, revisit whether the three labels are the best structure, and ask for more explicit real-world testing of the classification scheme before treating it as settled CPT doctrine.

If you want, I can make this one notch sharper and more pointed, as if it were talking points for you at the microphone.

###

Below are microphone points you can use orally, followed by a written comment draft you can submit with light editing.

Microphone points

  1. My first concern is readability. Ballot B is now so heavily edited that it is difficult to separate substantive policy choices from drafting residue. Before the committee asks people to react to the language, it should circulate a clean April 14 text.

  2. My second concern is whether redlining is still the right tool. This no longer looks like simple polishing. Option B newly presents Appendix S as guidance for choosing CPT descriptor terminology, and it newly emphasizes the software output and its role in clinical care. That looks more like a partial redesign than an edit.

  3. Third, I am not yet convinced the three-category A-A-A framework is conceptually stable. “Assistive, augmentative, autonomous” is memorable, but it is not a naturally self-explanatory progression. The draft seems to be mixing several different axes: detection, parameter generation, interpretation, automation, and management action.

  4. The April draft itself shows this instability. It now says that “automatic” derivation can apply to either augmentative or autonomous, which suggests that automation and autonomy are not the same thing, yet the terminology still risks blurring them.

  5. The augmentative category appears to be expanding in real time. Option B adds risk scores and metrics used in diagnosis, cure, mitigation, treatment, or prevention, and it allows validation either by equivalence to a metric in current use or, if novel, by impact on patient care. That is a meaningful broadening of the middle category.

  6. That may be a reasonable policy direction, but it reinforces the case for a cleaner structure. If the committee is still deciding what belongs in the middle bucket, it may be better to step back and rewrite the framework than continue layering edits onto a redline.

  7. Fourth, examples are not the same as beta testing. Ballot B gives CPT examples such as algorithmic ECG risk assessment, FFR, and retinal imaging, but that is illustration, not validation of the taxonomy.

  8. What is missing is reliability testing. A serious beta test would take a set of real services, have multiple expert reviewers classify them independently, and then study areas of disagreement. Until that is done, I would be cautious about treating this framework as settled.

  9. So my practical recommendation is simple: circulate a clean Ballot B, consider a fresh rewrite if needed, re-examine whether the A-A-A labels are the best structure, and perform more explicit real-world testing before final adoption.


Draft written comments for submission

I appreciate the effort to refine Appendix S, but I am concerned that the current drafting process has reached the point where the editorial issues now reflect deeper conceptual instability. Ballot B is so heavily revised that it is difficult to tell which changes are intended policy choices and which are simply remnants of layered editing. At a minimum, reviewers should be given a clean April 14, 2026 version of Ballot B before being asked to assess its substance.

My second concern is that the document may now need more than further redlining. Option B does not merely polish Option A. It newly presents Appendix S as guidance for determining the appropriate terminology for CPT code descriptors, and it gives added emphasis to the software output and the role of that output in clinical care. Those are not trivial editorial changes. They suggest that the purpose and architecture of Appendix S are still being worked out. When a document is changing at that level, a fresh rewrite may be more transparent and more useful than another round of edits layered onto an already difficult text.

My third concern is with the basic taxonomy itself. I am not yet persuaded that assistive, augmentative, autonomous is the best or most stable three-part framework. The three words are memorable, but they do not form a naturally self-explanatory progression. More importantly, the text appears to be sorting technologies along several different axes at once, including detection of relevant data, derivation of clinically meaningful parameters, generation of interpretations or conclusions, degree of physician involvement, and degree of machine-initiated action. That makes it harder to know whether the categories are truly conceptually distinct or whether they are serving as labels for a more complex matrix that has not yet been fully acknowledged.

The April draft itself illustrates this problem. Option B expressly states that the “automated” or “automatic” derivation of parameters can apply to either augmentative or autonomous. That clarification may be useful, but it also shows that automation and autonomy are not the same concept. If the framework must repeatedly explain that point, it is fair to ask whether the terminology itself is carrying too much semantic baggage and whether a more neutral structure would serve CPT better.

I am also struck by the extent to which the augmentative category appears to be evolving in real time. Compared with Option A, Ballot B broadens augmentative outputs to include risk scores and other metrics that may be used in diagnosis, cure, mitigation, treatment, or prevention of disease. It also states that such outputs may be validated either by equivalence to a metric already in clinical use or, if novel, by impact on patient care. In addition, Ballot B states that augmentative outputs are clinically meaningful if they are clinically validated, important, contribute to patient management, and are pertinent to the code. These are substantial shifts in emphasis. They may be reasonable, but they strengthen the argument that the committee is still refining the substance of the taxonomy, not merely editing its expression.

Finally, I do not believe the current examples are a substitute for meaningful beta testing. Ballot B now includes examples in the CPT code set, including algorithmic electrocardiogram risk-based assessment, noninvasive estimate of coronary fractional flow reserve, and retinal imaging. That is helpful as illustration. However, examples chosen by the drafters do not demonstrate that the taxonomy is reproducible, stable, or easy to apply in borderline cases. A more convincing validation process would test a set of real services, ask multiple knowledgeable reviewers to classify them independently, assess inter-rater agreement, and then analyze the paradoxes and disagreements. That is the kind of stress-testing one would expect before finalizing a framework that may shape future CPT descriptor language and categorization.

For these reasons, I suggest four practical steps. First, circulate a clean April 14 Ballot B. Second, consider whether a fresh draft would now be more fit for purpose than continued redlining. Third, revisit whether the current three-category terminology is the clearest and most durable structure. Fourth, perform more explicit real-world testing of the taxonomy before treating it as settled. In my view, the current multi-version confusion is not just an editorial nuisance. It is evidence that both the wording and the underlying classification model are still being developed at the same time.

I can also turn this into a more formal comment letter with a salutation and sign-off, or into an even tighter 90-second oral script.

###

Another problem is that they channel comments into a standard template meant for regular CPT code applications (e.g., the proposed code text, alternative coding, why a new code is needed, technical definition of the service, physician work vignette). That is a horrible fit for our comment, which really must be a "letter," not a matter of filling in a few short and irrelevant web-form boxes.

###

Yes. That is an excellent fifth concern, and it is different in kind from the first four because it is a process defect.

The problem is that the comment mechanism appears to presuppose a standard CPT code application, while your concern is about a taxonomy and governance framework. A form built for questions like proposed code text, alternative coding, technical definition, and physician work vignette is a poor fit for comments on whether Appendix S is readable, conceptually stable, properly structured, or adequately beta-tested. That mismatch does two bad things: it channels reviewers into the wrong frame of analysis, and it shrinks broad policy comments into fragments that do not reflect the real issue. In other words, the submission format itself may bias the feedback toward ordinary code-application issues and away from the higher-level problems that Appendix S raises.

For the microphone, I would add something like this:

5. The comment process itself is a bad fit for this topic. The standard CPT comment template is designed for an ordinary code application: proposed code text, alternative coding, technical definition, physician work, and vignette. Appendix S is not an ordinary code application. It is a cross-cutting taxonomy and policy framework. Comments on readability, conceptual structure, terminology, and validation do not fit naturally into those boxes. For this topic, the committee should allow or affirmatively invite a free-form letter submission rather than forcing reviewers into a template built for something else.

And for the written submission, I would add a paragraph like this:

Comment format and process. I also wish to note a process concern. The standard CPT comment template appears designed for an ordinary code application, with fields such as proposed code text, alternative coding, technical definition of the service, and physician work vignette. That format is not well suited to comments on Appendix S. Appendix S is not simply a request for a new code or a revision of an existing descriptor. It is a broader taxonomy and policy framework that raises questions of readability, conceptual design, terminology, evidentiary thresholds, and practical application. Those concerns cannot be meaningfully addressed in a few short fields intended for a conventional code proposal. For this reason, I believe comments on Appendix S should be accepted and reviewed in the form of a free-standing letter, or at minimum with an option for a substantial narrative attachment.

Here is an even sharper version, if you want it a little punchier:

The current comment portal seems built for routine CPT code applications, not for review of a foundational taxonomy. That is a mismatch. A framework like Appendix S should be reviewed through a letter-style comment process, because the key questions are not “what is the code text” or “what is the physician vignette,” but rather “is the framework readable, conceptually sound, consistently structured, and reliably applicable.”

This actually strengthens your whole presentation, because now the argument has five linked parts:

Text problem: the document is unreadable.
Drafting problem: it may need a rewrite, not another redline.
Taxonomy problem: the A-A-A structure may not be stable.
Validation problem: it has not been adequately beta-tested.
Process problem: the comment mechanism is built for the wrong kind of review.

That is a very coherent critique.

I can now fold this fifth point into the full oral script and the written letter so you have one integrated package.

###

Absolutely. Here is an integrated package with microphone remarks and a written comment letter.

Microphone script

Thank you. I have five concerns about Appendix S as it now stands for the April 30 meeting.

First, readability. Ballot B is now so heavily edited that it is hard to tell what is a deliberate policy choice and what is simply residue from repeated revisions. Since Option B makes meaningful changes, including framing Appendix S as guidance for CPT descriptor terminology, adding emphasis on the software output and its role in clinical care, and revising the descriptions of the categories, I do not think reviewers can fairly assess the text without a clean April 14 version.

Second, I am not sure further redlining is the right tool. This looks less like polishing and more like partial redesign. For example, the April version broadens the augmentative category to include risk scores and outputs used in diagnosis, treatment, or prevention, and it says that automatic derivation can apply to either augmentative or autonomous. Those are substantive moves. At some point, a fresh document fit for purpose may be clearer than another dense redline.

Third, I am not yet persuaded that the three-category A-A-A framework is stable. “Assistive, augmentative, autonomous” is memorable, but it is not a naturally self-explanatory progression. The draft appears to mix several different axes at once: detection, parameter generation, interpretation, automation, and management action. The text itself shows this strain, especially when it has to clarify that “automatic” does not necessarily mean “autonomous.”

Fourth, examples are not the same as beta testing. The April draft gives examples from the CPT code set, which is helpful, but that is not the same as showing the taxonomy is reliable. A stronger process would take a meaningful set of real services, have multiple experts categorize them independently, and then examine agreement and disagreement. Until that kind of testing is done, I would be cautious about treating this framework as settled.

Fifth, the comment process itself seems to be a poor fit. As I understand it, comments are being funneled into a standard CPT application template built for ordinary code proposals, with boxes such as code text, alternative coding, technical definition, and physician work vignette. That is a bad fit for Appendix S. Appendix S is a cross-cutting taxonomy and policy framework. Comments on readability, structure, terminology, and validation really need to be submitted and reviewed as a letter, not squeezed into a few short and largely irrelevant fields.

So my practical request is straightforward: provide a clean April 14 text, consider whether a rewrite is more appropriate than continued redlining, revisit whether the three-category terminology is the best structure, allow more explicit real-world testing, and accept free-form letter comments for Appendix S rather than forcing them into a routine code-application template.


Draft written comment letter

Re: Appendix S for the April 30, 2026 CPT Editorial Panel meeting

I appreciate the effort that has gone into revising Appendix S. However, I am concerned that the current drafting process has reached the point where the editorial problems now reflect deeper conceptual and process issues.

My first concern is readability. The April 14 Ballot B text is so heavily edited that it is difficult to distinguish substantive policy changes from drafting residue. That matters because the April draft appears to make meaningful moves. It newly presents Appendix S as guidance for determining appropriate terminology for CPT code descriptors. It gives greater emphasis to the software output and the role of that output in clinical care. It also revises the discussion of the assistive, augmentative, and autonomous categories in several important respects. Under these circumstances, I do not believe reviewers can fairly evaluate the proposal without first seeing a clean April 14 version.

My second concern is that the document may now require more than continued redlining. Option B does not merely polish Option A. It broadens the augmentative category, including explicit reference to risk scores and outputs that may be used in diagnosis, cure, mitigation, treatment, or prevention of disease. It also clarifies that the “automatic” derivation of parameters can apply to either augmentative or autonomous outputs. Those are not minor stylistic edits. They suggest that the architecture of the framework is still being worked out. When the theory of a document is still moving, a fresh rewrite may be more transparent and more useful than another dense redline layered on top of prior versions.

My third concern is with the taxonomy itself. I am not yet persuaded that assistive, augmentative, autonomous is the clearest or most stable structure. The three terms are memorable, but they do not form a naturally self-explanatory progression. More importantly, the current draft appears to sort technologies along several different axes at once, including detection of relevant data, derivation of clinically meaningful parameters, generation of interpretations or conclusions, degree of physician involvement, and machine-initiated action. The April text itself illustrates this tension by stating that “automatic” derivation can apply to either augmentative or autonomous outputs. That clarification may be useful, but it also shows that the terminology may be carrying too much semantic baggage. If repeated clarification is needed to separate automation from autonomy, it is fair to ask whether a more neutral or more explicitly structured framework would serve CPT better.

My fourth concern is validation of the framework itself. The April draft includes examples from the CPT code set, which is helpful as illustration. However, examples chosen by the drafters are not the same as meaningful beta testing. A stronger process would test a set of real services, have multiple knowledgeable reviewers classify them independently, assess inter-rater agreement, and analyze the areas of disagreement and paradox. That kind of stress-testing would be especially important here because the draft is drawing fine distinctions between assistive, augmentative, and autonomous services. Until such testing is done, I would be cautious about treating the framework as settled.

My fifth concern is the comment process itself. As I understand it, comments are being funneled into a standard CPT application template designed for ordinary code proposals, with fields such as proposed code text, alternative coding, technical definition of the service, and physician work vignette. That format is a poor fit for comments on Appendix S. Appendix S is not simply a request for a new code or a revision of an existing descriptor. It is a broader taxonomy and governance framework that raises questions of readability, conceptual structure, terminology, evidentiary thresholds, and practical application. Those concerns cannot be meaningfully addressed by squeezing them into a few short fields intended for a routine code application. For this topic, comments should be accepted and reviewed as a free-standing letter, or at minimum with a clear option for a substantial narrative attachment.

For these reasons, I respectfully suggest the following steps before Appendix S is treated as settled. First, circulate a clean April 14 Ballot B. Second, consider whether a fresh rewrite would now be more fit for purpose than continued redlining. Third, revisit whether the current three-category terminology is the clearest and most durable structure. Fourth, conduct more explicit real-world testing of the classification framework. Fifth, permit or affirmatively invite letter-style comments for Appendix S rather than forcing reviewers into a template built for ordinary code proposals.

In my view, the current multi-version confusion is not simply an editorial nuisance. It is evidence that both the wording and the underlying classification model are still being developed at the same time. A cleaner document and a better-fitting comment process would improve the quality of the committee’s review.

Sincerely,

[Your Name]

If you want, I can next make this more diplomatic, more pointed, or short enough to fit a portal character limit.

###

Absolutely. Here is a more diplomatic version, followed by a portal-friendly shortened version in case the form has a tight character limit.

Diplomatic letter version

Re: Appendix S for the April 30, 2026 CPT Editorial Panel meeting

Thank you for the substantial work reflected in the current revisions to Appendix S. The effort to create a usable framework for describing software-enabled medical services and procedures is important and timely. I offer the following comments in the spirit of helping strengthen the clarity and usability of the proposal.

My first observation is that the current April 14 ballot text is quite difficult to review in redline form. Because the draft appears to make meaningful changes, including framing Appendix S as guidance for CPT descriptor terminology, placing greater emphasis on software outputs and their role in clinical care, and refining the descriptions of the categories, a clean version of Ballot B would greatly assist reviewers in understanding the intended policy choices.

Second, the extent of revision suggests that the committee may wish to consider whether continued redlining remains the best drafting method. The April text appears to do more than clarify wording. For example, it broadens the discussion of augmentative outputs to include risk scores and outputs used in diagnosis, treatment, or prevention, and it clarifies that automatic derivation may apply to either augmentative or autonomous outputs. These are meaningful refinements. A fresh clean draft, prepared from the ground up, might now be easier for reviewers to evaluate than another heavily layered markup.

Third, I encourage the committee to continue examining whether the three-category terminology of assistive, augmentative, and autonomous is the clearest long-term structure. The framework is memorable, but the underlying distinctions appear to involve several different concepts at once, including detection of relevant data, derivation of clinically meaningful parameters, generation of interpretations or conclusions, degree of physician involvement, and degree of machine-initiated action. The April text helpfully clarifies some of these boundaries, but that clarification also suggests the taxonomy may still be evolving conceptually.

Fourth, the current examples are useful illustrations, but I would encourage additional real-world testing of the framework itself. A helpful next step might be to apply the taxonomy to a larger set of actual services using multiple expert reviewers, and then examine areas of agreement and disagreement. That kind of testing could help identify borderline cases and improve confidence that the taxonomy is reproducible and practical in use.

Finally, I would respectfully note a process concern. The standard CPT comment template appears designed for ordinary code applications, with fields such as proposed code text, alternative coding, technical definition, and physician work vignette. Appendix S is different. It is a broader taxonomy and policy framework, and comments on readability, conceptual design, terminology, and validation do not fit naturally into a conventional code-application format. For this reason, it would be helpful if comments on Appendix S could be accepted and reviewed as a free-standing letter, or at minimum with a clear option for a substantial narrative attachment.

In summary, I respectfully suggest five steps: provide a clean April 14 Ballot B, consider whether a fresh draft would now be preferable to continued redlining, continue evaluating whether the current three-category structure is the clearest long-term framework, perform additional real-world testing of the taxonomy, and allow letter-style comments for this type of proposal. These steps would improve the clarity of review and support stronger feedback from stakeholders.

Sincerely,
[Your Name]

Portal-friendly shorter version

Thank you for the work on Appendix S. I offer several comments in the spirit of improving clarity and usability.

First, the April 14 ballot text is difficult to review in heavily redlined form. Because the draft appears to make meaningful changes, including its framing as guidance for CPT descriptor terminology, its emphasis on software outputs and their role in clinical care, and its refinements to the category definitions, a clean Ballot B text would greatly assist reviewers.

Second, the extent of revision suggests that a fresh clean draft may now be more useful than continued redlining. The April draft appears to do more than clarify language. It broadens the discussion of augmentative outputs and clarifies that “automatic” derivation may apply to either augmentative or autonomous outputs.

Third, I encourage continued review of whether assistive, augmentative, and autonomous is the clearest long-term structure. The framework seems to address several distinct concepts at once, including detection, parameter generation, interpretation, physician involvement, and machine-initiated action.

Fourth, examples are helpful, but additional real-world testing of the taxonomy would strengthen confidence in its reproducibility and practical use.

Finally, the standard CPT comment template seems designed for ordinary code applications, not for a broader taxonomy and policy framework such as Appendix S. It would be helpful to allow letter-style comments or a substantial narrative attachment for this topic.

If you want, I can compress this further to something like 1,000 characters, 1,500 characters, or 2,000 characters.

Wednesday, April 8, 2026

Rural Health Technology Links

Revisiting the rural and urban divide in hospital health information technology adoption: Evidence from 2023
Alice S. Yan, Teagan K. Maguire, Jie Chen. J Rural Health. 2025 Dec 10;41(4):e70100. doi: 10.1111/jrh.70100 (PMC).
"However, adoption alone is not enough. Ensuring that these technologies translate into equitable use remains a critical challenge. Sustaining ..."

Mar 24, 2026: "It comes as the Trump administration is pushing the increased use of AI to help transform rural health. Health Secretary Robert F. Kennedy Jr. ..."

Apr 9, 2025: "Technological gaps handicap rural hospitals as billions in federal funding to modernize infrastructure lags. The reliance on outdated ..."

"Digital technology has the potential to help with this and other communications problems. Patients no longer need to visit doctors' offices to be reminded to ..."

Mar 30, 2026: "Rural hospitals that approach telehealth this way consistently see improvements across patient outcomes, clinician satisfaction, and financial ..."

M. E. Frisse, 2005 (cited by 6): "The costs that these organizations face when contemplating necessary information technology seem formidable to the point of not being affordable."

"Rural healthcare providers face three primary IT hurdles: lack of broadband access, telehealth availability, and overall system cost constraints."