Wednesday, February 11, 2026

Consent about AI

There are several use-cases where a patient might consent or dissent to uses of AI.

  1. A patient might consent to use of AI for clinical decision support
  2. A patient might deny use of their data for training of AI.
  3. A patient might consent to use of their data in de-identified form for training of AI.

Given the Consent model, the patient's choices might be expressed in a Consent by either: 

  • Generically allowing or denying AI by PurposeOfUse 
  • Specifically allowing or denying a specific AI by referencing the Device resource for that AI.

PurposeOfUse

The cleanest method is to use PurposeOfUse as the basis for the provision in the Consent. This allows the Consent to be independent of any specific AI system or model, and thus it does not require updates as new AI systems or models are developed. The PurposeOfUse indicates the reason for the AI access, such as MLTRAINING for training of AI, or TREATDS for clinical decision support.

Further, we look to the PurposeOfUse vocabulary to indicate the reason the AI gives for accessing data. For example, MLTRAINING is defined for when an AI is looking to train on data; TREATDS is defined for when an AI is providing clinical decision support; and PMTDS is defined for when an AI is providing analysis for payment decisions.

The use of PurposeOfUse does require that any access by the AI, or by an agent feeding the AI, carries the given PurposeOfUse code. This is a trust model: the AI, or the agent feeding it, must accurately indicate the PurposeOfUse when accessing data. However, this trust model is common in many other aspects of healthcare data access and thus is not unique to AI.
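The trust model above can be made concrete with a small sketch. This is a minimal illustration (not a full FHIR Consent engine, and the simplified dict structure and default-deny behavior are my assumptions, not part of any specification) of how an enforcement point might compare the PurposeOfUse asserted on an access request against Consent provisions:

```python
# A minimal sketch of an access decision against Consent provisions.
# The provision dicts below are a simplified form of Consent.provision;
# the codes are from the HL7 PurposeOfUse vocabulary.

def decide(provisions, requested_purpose):
    """Return 'permit' or 'deny' for the asserted PurposeOfUse.

    First matching provision wins; the default of 'deny' when nothing
    matches is a policy choice, shown here only for illustration.
    """
    for provision in provisions:
        if requested_purpose in provision.get("purpose", []):
            return provision["type"]
    return "deny"

consent_provisions = [
    {"type": "deny",   "purpose": ["MLTRAINING"]},  # no use for AI model training
    {"type": "permit", "purpose": ["TREATDS"]},     # allow clinical decision support
]

print(decide(consent_provisions, "MLTRAINING"))  # deny
print(decide(consent_provisions, "TREATDS"))     # permit
```

The enforcement point can only be as good as the honesty of the asserted PurposeOfUse, which is exactly the trust model described above.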

Allow AI for ML Training

* provision.type = #permit
* provision.purpose[+] = $purposeOfUse#MLTRAINING

Consent example: Allow ML Training
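In FHIR JSON, the provision above might look like the following sketch. The status, patient reference, and code system URL are illustrative placeholders of mine, not a complete validated Consent; confirm the PurposeOfUse codes against the current terminology release:

```python
import json

# Minimal JSON sketch of the "Allow AI for ML Training" provision.
# Only the provision is the point here; the other elements are placeholders.
consent = {
    "resourceType": "Consent",
    "status": "active",                              # placeholder
    "patient": {"reference": "Patient/example"},     # placeholder
    "provision": {
        "type": "permit",
        "purpose": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
            "code": "MLTRAINING",
        }],
    },
}
print(json.dumps(consent, indent=2))
```

The deny form differs only in `provision.type`.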

Deny AI for ML Training

* provision.type = #deny
* provision.purpose[+] = $purposeOfUse#MLTRAINING 

Consent example: Deny ML Training

Allow AI for Clinical Decision Support

* provision.type = #permit
* provision.purpose[+] = $purposeOfUse#TREATDS 

Consent example: Allow AI for Clinical Decision Support

Deny AI for Clinical Decision Support

* provision.type = #deny
* provision.purpose[+] = $purposeOfUse#TREATDS 

Consent example: Deny AI for Clinical Decision Support

Specific AI Systems or Models

For this we look to the current identification of AI as a FHIR Device resource. A specific AI system or model would then be identified in a Consent.provision.agent.reference, within a permit or deny provision.

This model requires that all access by an AI is attributed to the FHIR Device describing the AI, which might not be the case given how the AI is orchestrated. This model is also fragile: a new model or new software version would be a new Device, and thus would require a new provision in the Consent to indicate consent or dissent for that new AI.

Allow a specific AI for a specific purpose

In this case there is simply a provision indicating that the AI is permitted. No purposeOfUse is indicated, though that could be an additional restriction. There are no other restrictions on the kinds of actions or the kinds of data, but those could also be added.

* provision.type = #permit
* provision.agent.reference = Reference(Device/AIdevice) 

Consent example: Allow specific AI for specific purpose
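A JSON sketch of that device-specific provision follows. The element name tracks the text above (provision.agent.reference); in FHIR R4/R5 Consent the equivalent element is provision.actor.reference, and the exact nesting shown is my assumption for illustration:

```python
import json

# Sketch of a provision permitting one specific AI, identified by its
# Device resource. Nesting is illustrative; check the target FHIR version.
provision = {
    "type": "permit",
    "agent": [{
        "reference": {"reference": "Device/AIdevice"},
    }],
}
print(json.dumps(provision, indent=2))
```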

Limitations on AI Access

In FHIR Permission there is a concept of a "limit": limits placed on a permit rule. The limit might be an obligation or refrain, a specific additional data tag, or an explicit removal of data elements. I have made an extension to replicate this "limit" concept so that it can be used on a Consent.provision. A "limit" should never be allowed to expose data where that limit can't be enforced; specifically, the recipient of the data must be trusted to enforce the obligation or refrain indicated. PermissionRuleLimit Extension

In FHIR R6 one could have a Consent that holds the provisions as a Permission resource, and thus the limits capability is available.

Allow AI for ML Training on De-Identified Data

* provision.type = #permit
* provision.purpose[+] = $purposeOfUse#MLTRAINING
* provision.modifierExtension[limit].extension[control].valueCodeableConcept = $obligation#DEID  

Consent example: Allow ML Training on De-Identified Data
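A JSON sketch of that limited provision follows. The extension URL below is a placeholder of mine, not the canonical URL; use the one defined in the PermissionRuleLimit extension's own IG:

```python
import json

# Sketch of a de-identification "limit" on a permit provision, carried as a
# modifierExtension. The extension URL is a placeholder, not the real canonical.
provision = {
    "type": "permit",
    "purpose": [{
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
        "code": "MLTRAINING",
    }],
    "modifierExtension": [{
        "url": "https://example.org/StructureDefinition/permission-rule-limit",  # placeholder
        "extension": [{
            "url": "control",
            "valueCodeableConcept": {
                "coding": [{"code": "DEID"}],  # obligation: de-identify first
            },
        }],
    }],
}
print(json.dumps(provision, indent=2))
```

Because it is a modifierExtension, a recipient that does not understand the limit must not process the provision as a plain permit.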

Conclusion

The above examples simply show how a Consent.provision iteration can carry permit and deny to indicate consent or dissent for AI. The examples are not exhaustive; there are many other combinations of provisions that could express consent or dissent for AI. The examples also do not indicate any specific data elements being allowed or denied, but those could be added as additional restrictions on the provision.

The reader should be able to take a quilted Consent that has various provisions indicating consent or dissent for various clinical uses (TPO) and add provisions indicating consent or dissent for various AI use-cases, and thus have a single Consent that captures the patient's preferences for both traditional clinical use and AI use.

The above examples are available in a draft IG on Consent About AI. That IG might further refine and improve beyond this blog article.

Thursday, February 5, 2026

Security Labeling Service - Reference Implementation

I have spent far too much time with GitHub Copilot, but I am so happy with the output. I have vibe-coded two applications. I wrote in a README what I wanted done, and Copilot produced a working application on the first try. I then spent two days improving them (adding features and robustness). I never needed to touch code; I just typed what I wanted changed, improved, or fixed.

I have insisted that the apps clearly indicate their Provenance: 
This application was developed by GitHub Copilot (Claude Sonnet 4.5) ..., at the direction of John Moehrke of Moehrke Research LLC

SAMHSA ValueSet viewer

The first vibe-coding project I took on was to create a github.io app that allows me to see the contents of a given set of ValueSets from SAMHSA. I needed this because some of these ValueSets are too big for the IG Publisher to render their expansion. I had asked for a setting that would raise the IG Publisher's 1000-entry expansion max to 2000, but that request was rejected. So, this was my inspiration.

All I did was ask Copilot to make me an application that can use the FHIR-defined $expand operation against the tx.fhir.org server for a list of ValueSets by url, and display the results.
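The request the app issues can be sketched in a few lines. This only builds the $expand request URL (no network call); the ValueSet url shown is a placeholder, and `count` is the standard $expand paging parameter:

```python
from urllib.parse import urlencode

# Sketch of the FHIR ValueSet/$expand request against the public
# tx.fhir.org terminology server. The ValueSet url is a placeholder.
TX_SERVER = "https://tx.fhir.org/r4"

def expand_url(valueset_url, count=2000):
    """Build the GET URL for a ValueSet/$expand by canonical url."""
    params = urlencode({"url": valueset_url, "count": count})
    return f"{TX_SERVER}/ValueSet/$expand?{params}"

print(expand_url("http://example.org/fhir/ValueSet/sample"))
```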

This was a total of 15 minutes of my time. I then proudly told my family, and my boys proceeded to break it. It turns out a feature I had asked for, "Check All Sizes", introduced a bug if it was run first. So, I told Copilot about this effect, and it fixed it. Add 5 more minutes.

Try it out at SAMHSA ValueSet Viewer. Don't beat on it too much as it does use tx.fhir.org.

Security Labeling Service - Reference Implementation

This one is far bigger, and I had been thinking of asking AI to make it for me. I created a GitHub repository and wrote a README.md with just a few words about what I wanted done. Mostly it is what you see at the beginning of the current README.md, although that has been touched up as I asked for more features. 

Again, the first try worked. I have since improved it in ways that I had failed to explain in my original README.md ask: that I wanted a docker-deployable server, that the API must be FHIR $operation compliant, that it needs to support ValueSets with multiple topic values, etc.

I was impressed that it started with a sample ValueSet Bundle and a sample data Bundle. Very simplistic, but reasonable. These have since been updated to test some of the added features.

What took the most time was making sure this SLS actually worked. For this I needed complex ValueSets and complex data. In both cases I have been working in the SHIFT-Task-Force on these very things, and I had an IG that had both together. First I chose to break the data use-cases out from the SLS and ValueSets, mainly because I know there is going to be significant improvement in the data use-cases, and the ValueSets cause the build to be very slow. Among the many ValueSets I had to choose from, I chose the ones derived from the existing LEAP SLS Reference Implementation. This is an early open-source implementation and suffers from having the codes hard-coded into the source code. 

So I have the data from the use-case, and it is "in theory" already properly tagged. It turns out that tagging had some errors. I had ValueSets, but they needed to be rearranged and have topic indications. It is this topic indication that is key: each of these ValueSets is specific to a kind of sensitive data. That is to say, the ValueSet is composed of a bunch of clinical codes, or a hierarchy of codes, and the ValueSet then needs to be identified with the sensitivity code that it represents. For example, ValueSet (A) has a topic of "BH" (in the HL7 vocabulary this is behavioral health), and composed in the ValueSet are behavioral-health-indicating codes from LOINC, SNOMED, ICD, etc. 
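The behavioral-health example can be sketched as a ValueSet carrying its own topic. I show it with the R5 ValueSet.topic metadata element, but the SLS reference implementation may express the topic differently (e.g. via useContext); the canonical url and the included concept code are placeholders of mine:

```python
import json

# Sketch of a sensitivity ValueSet labeled with its topic "BH"
# (behavioral health, from the HL7 v3 ActCode sensitivity vocabulary).
valueset = {
    "resourceType": "ValueSet",
    "url": "http://example.org/fhir/ValueSet/sls-bh",  # placeholder url
    "topic": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
            "code": "BH",  # behavioral health sensitivity
        }],
    }],
    "compose": {
        "include": [{
            "system": "http://loinc.org",
            "concept": [{"code": "example-code"}],  # placeholder, not a real LOINC code
        }],
    },
}
print(json.dumps(valueset, indent=2))
```

When the SLS finds a match against this ValueSet's expansion, the topic is what tells it which sensitivity tag to apply to the matched data.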

Testing these ValueSets and data Bundles did find a few more bugs, and a few more features to add. I do have even bigger ValueSets and data to try, derived from S

I'm not going to go deeper here, as this is available as Open-Source, and there is an Implementation Guide with the defined FHIR Operations.

Conclusion

Next up is to see if my kids can break this. That is another reason not to explain it further here: any fixes I make will show up on the GitHub.

I will note that my household is odd. My kids hate AI, with a passion, while I seem to be doing okay with it. One would expect the old man to be the one with an aversion to AI. I am very suspicious: I have seen it really mess up, and I have seen enough movies to worry about what it might do. But I choose to work with it in order to make it better at helping humans.