Friday, December 12, 2025

AI Assisted Patient Appointment Traceability

The following scenario is just an example of AI use and of the impact of AI Transparency. The intent of the use-case is to show that where AI gets engaged in Patient care, attribution to the AI needs to be clearly indicated. I am not endorsing AI use specifically in Patient Appointments, but rather using it as a representative interaction for the purpose of showing Provenance, and thus Accountability, for AI use.

  1. Patient provides lab test specimens prior to appointment.
  2. AI analyzes lab test results along with patient history.
  3. Patient appointment with Doctor considering AI report.
  4. Patient care is improved by AI.

Detailed Steps

  1. Patient is scheduled for a routine check-up appointment.
  2. Patient had provided specimens for lab tests prior to the appointment.
  3. On the day of the appointment, an AI is called to analyze the lab test results.
  4. The AI considers the lab test results, related to prior lab test results, current conditions, current medications, and family medical history.
  5. The AI generates a summary report highlighting any abnormalities or areas of concern.
  6. The AI summary report includes various actions that could be recommended based on the analysis.
  7. During the appointment, the healthcare provider reviews the AI-generated report with the patient.
  8. The healthcare provider discusses any abnormalities or concerns identified in the report.
  9. The healthcare provider considers the recommendations from the AI generated report and recommends further tests or lifestyle changes if necessary.
  10. The patient is given an opportunity to ask questions and discuss their health.
  11. The appointment concludes with a follow-up plan, if needed, and scheduling of the next routine check-up.
  12. The AI-generated report is stored in the patient's medical records for future reference.
  13. The healthcare provider documents the appointment details and any recommendations made.
  14. The patient receives a summary of the appointment and any next steps via their patient portal.

Patient AI Summary

This document outlines the steps involved in a typical patient appointment for a routine check-up, including the integration of AI analysis for lab test results and AI recommendations.

In this case, since the Patient AI Summary is generated by the AI, the author of the document is the AI system itself. The document may also be tagged with metadata indicating that it was AI-generated.

The summary would itemize the list of history, conditions, medications, lab results, and family history that were considered by the AI in its analysis. It would indicate the new lab test results that were analyzed in the context of prior lab test results and the patient's overall medical history. It would include citations to medical knowledge bases or guidelines that the AI used to inform its analysis and recommendations.

The recommendations would each include a rationale, linking to evidence from the patient's data and relevant medical literature. There would be discussion of benefits, risks, and side effects.

AI Provenance

Provenance information about the AI analysis is recorded to ensure transparency and accountability. This includes details such as the AI model version, data sources used for analysis, and any relevant parameters or settings applied during the analysis process.
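One way this provenance might be expressed is as a FHIR R4 Provenance resource. The sketch below (Python building the JSON) is illustrative only: the Device id, the resource references, the timestamp, and the choice of the "assembler" participant-type code are my assumptions, not requirements of any published IG.

```python
import json

def ai_analysis_provenance(report_id, model_version):
    """Sketch of a FHIR R4 Provenance for the AI lab analysis.
    Ids, timestamp, and the participant-type code are hypothetical."""
    return {
        "resourceType": "Provenance",
        # The AI-generated summary report this Provenance describes
        "target": [{"reference": f"DocumentReference/{report_id}"}],
        "recorded": "2025-12-12T09:30:00Z",
        "agent": [{
            "type": {"coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                "code": "assembler"}]},
            # The AI system is represented as a Device resource
            "who": {"reference": "Device/ai-lab-analyzer",
                    "display": f"Lab analysis model {model_version}"}}],
        # Only the data the AI actually used in its analysis
        "entity": [
            {"role": "derivation", "what": {"reference": "Observation/lab-123"}},
            {"role": "derivation", "what": {"reference": "FamilyMemberHistory/fam-1"}}]}

print(json.dumps(ai_analysis_provenance("ai-summary-1", "v2.3"), indent=2))
```

Note that the entity list carries only the relevant inputs; the AI model version and settings would live on the referenced Device resource.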


Audit the AI

An audit trail is maintained to track the AI's analysis process, ensuring that all steps taken by the AI are documented for future reference. This includes logging the input data, analysis steps, and output results. This is different from Provenance in that it records the searches the AI made into the patient medical record to gather information for its analysis. The audit record of a search typically includes the search request parameters, and does not include the response to the search request; an audit analysis would therefore re-run the search to determine what was returned.

For example, a broad search on a patient record would return all medical history. The AI may appropriately pull all historic medical data, as there may be relevant data anywhere in the historic record, and it can quickly determine what is relevant and what is not. Data the AI determined to be not relevant to the analysis (such as resolved conditions, healed broken bones, or prior medications no longer being taken) would not be included in the AI Provenance as data used by the AI analysis. Thus the Audit would include the search of the full medical history, while the Provenance would only include the relevant data actually used by the AI.

The Audit would include an independent Audit entry for the creation of the Patient AI Summary document itself. This might include the data used, depending on the configuration of the audit system.

If there is some business rule, or privacy consent restriction, that would prevent the AI from accessing certain data in the patient record, the Audit would include the access control denial.

The Audit log would cover everything found in the Provenance, but would be less succinct.
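The search audit described above might be sketched as a FHIR R4 AuditEvent, where AuditEvent.entity.query carries the base64-encoded request parameters and, as the text notes, no response payload is recorded. The Device ids and the query string are illustrative assumptions.

```python
import base64

def search_audit_event(query_params):
    """Sketch of a FHIR R4 AuditEvent for a patient-record search made by
    the AI. Only the request parameters are recorded (entity.query), not
    the response. Ids and timestamp are hypothetical."""
    return {
        "resourceType": "AuditEvent",
        # DICOM audit vocabulary: 110112 = Query
        "type": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                 "code": "110112", "display": "Query"},
        "recorded": "2025-12-12T09:29:55Z",
        "agent": [{"who": {"reference": "Device/ai-lab-analyzer"},
                   "requestor": True}],
        "source": {"observer": {"reference": "Device/ehr-server"}},
        "entity": [{
            # The search request itself, base64-encoded per AuditEvent.entity.query
            "query": base64.b64encode(query_params.encode()).decode()}]}

evt = search_audit_event("Patient/42/$everything")
```

An access-control denial (the consent-restriction case above) would be a similar AuditEvent with an unfavorable outcome code.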

Encounter Documentation

The healthcare provider documents the appointment details, including any findings from the AI report and recommendations made during the consultation.

The writing of this documentation may also be assisted by AI, which can help summarize the key points discussed during the appointment and ensure that all relevant information is accurately recorded in the patient's medical record. This is a different use of AI from the above, and has different inputs and outputs. This documentation would be authored by the Doctor, with assistance from the AI. Thus there would be another Provenance indicating the AI assistance in documentation, with authorship attribution to the Doctor.
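This two-agent attribution might be sketched as a Provenance with the Doctor as author and the AI Device as a contributing agent. The resource ids, and the use of "assembler" for the AI's role, are my assumptions for illustration.

```python
def assisted_note_provenance(note_id):
    """Sketch: Provenance for encounter documentation authored by the
    Doctor with AI assistance. Ids and the AI's role code are hypothetical."""
    ptype = "http://terminology.hl7.org/CodeSystem/provenance-participant-type"
    return {
        "resourceType": "Provenance",
        "target": [{"reference": f"DocumentReference/{note_id}"}],
        "recorded": "2025-12-12T10:15:00Z",
        "agent": [
            # The Doctor is the author, and remains accountable
            {"type": {"coding": [{"system": ptype, "code": "author"}]},
             "who": {"reference": "Practitioner/dr-jones"}},
            # The AI contributed, but did not author
            {"type": {"coding": [{"system": ptype, "code": "assembler"}]},
             "who": {"reference": "Device/ai-scribe"}}]}

note_prov = assisted_note_provenance("encounter-note-9")
```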

Patient Summary

The patient receives the summary of the appointment, including any next steps or recommendations, via their patient portal for easy access and reference.

AI slop remediation

Now imagine that the healthcare organization has learned that the AI model they were using makes specific mistakes with specific kinds of lab results. The organization can find all of the Provenance attributed to that AI Model, and thus the subset of outputs that the AI Model influenced. They could further narrow to those Provenance that have an .entity relationship with a given AI Prompt known to have produced poor results, yielding the subset of instances where the AI was used with the defective AI Prompt. They can then review those outputs and determine whether any patient care was negatively impacted. If so, they can reach out to those patients to remediate the situation. This is an example of how Provenance enables accountability for AI use in healthcare.
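The two narrowing steps above could be expressed as standard R4 Provenance searches (the `agent` and `entity` search parameters exist in core FHIR). Representing the prompt as its own referenceable resource, and the ids used here, are assumptions for illustration.

```python
def remediation_queries(base, model_device_id, prompt_ref):
    """Sketch of the two remediation searches: all outputs the suspect AI
    model influenced, then the subset that used the defective prompt.
    The base URL, Device id, and prompt reference are hypothetical."""
    return [
        # 1. everything attributed to that AI Model (Provenance.agent.who)
        f"{base}/Provenance?agent=Device/{model_device_id}",
        # 2. narrowed by .entity to the defective prompt (Provenance.entity.what)
        f"{base}/Provenance?agent=Device/{model_device_id}"
        f"&entity={prompt_ref}"]

queries = remediation_queries("https://ehr.example.org/fhir",
                              "ai-lab-analyzer",
                              "DocumentReference/prompt-v7")
```

From each matching Provenance, the .target references lead to the actual outputs needing review.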

New AI software, Models, and Prompts

When new AI software, models, or prompts are introduced, the healthcare organization can track their adoption and usage through Provenance records. This allows them to monitor the performance and impact of the new AI tools on patient care. If any issues arise, they can quickly identify which AI tools were involved and take appropriate action to address any concerns. This ongoing monitoring and accountability help ensure that AI integration in healthcare continues to benefit patients while minimizing risks.

The change would be represented in a new Device resource representing the new AI software or model; if there is a configured prompt, this would also be represented in the Device resource.

The Provenance records for AI analyses would then reference the new Device resource as the .agent, allowing for clear tracking of which AI tools were used in each analysis.
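A Device for a new AI model might be sketched as below. Carrying the configured prompt in Device.property is one possible modeling choice I am assuming here, not a published rule; the names and version string are likewise illustrative.

```python
def ai_device(model_name, version, prompt_text):
    """Sketch of a FHIR R4 Device representing AI software, its model
    version, and a configured prompt. The 'configured-prompt' property
    type is a hypothetical placeholder."""
    return {
        "resourceType": "Device",
        "deviceName": [{"name": model_name, "type": "model-name"}],
        "version": [{"value": version}],
        "property": [{
            "type": {"text": "configured-prompt"},  # placeholder, not a defined code
            "valueCode": [{"text": prompt_text}]}]}

dev = ai_device("lab-analysis-llm", "2.4.0", "Summarize abnormal lab results...")
```

Each new model or prompt change would get a new Device (or Device version), so Provenance.agent references cleanly distinguish old from new.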

Conclusion

The AI Transparency IG includes standards for recording that data was influenced by AI. The IG does not try to control how AI is used, or restrict how the AI Transparency records are used. The examples given in the guide are very focused on minimal expression for illustrative purposes. Here I have tried to express a more realistic use-case, so as to drive a clearer understanding of the benefit of AI Transparency.


Wednesday, December 10, 2025

Controlling AI in Healthcare

AI must be controlled. That is to say that AI accessing data and making data is a privileged activity. It is not uncommon during the early days of a new technology for that technology to be uncontrolled, and for Security to be seen as an afterthought. There are three specific moments when AI needs to be controlled:

  1. when the AI is trained on a dataset, 
  2. when the AI is used to make treatment decisions (e.g. on a given Patient),
  3. when the AI is used to make payment decisions (e.g., on a given Patient)

Teaching

Teaching an AI/ML/LLM with a dataset needs to be controlled to prevent ingestion of data that is not authorized to be used for this purpose. For this use-case, HL7 has identified a specific PurposeOfUse code to indicate the teaching/training purpose: MLTRAINING. With this code a few things can be done:


When training is done, the authorization request is for the MLTRAINING PurposeOfUse. Thus, the access control will either permit or deny such a PurposeOfUse, and the authorization would be audited as such. This PurposeOfUse would not be given to an Agent that is not authorized to use it; thus, this PurposeOfUse can't be used by other actors.

A Dataset can be marked as forbidden for MLTRAINING PurposeOfUse, which would make that Dataset unavailable for training. This, in theory, could be done down to the data artifact basis.

There is a standard in the general AI world that I helped create to tag datasets with Provenance and Authorizations including the license that would need to be followed if the data are to be ingested by an AI/ML/LLM. The Data & Trust Alliance has published this Data Provenance Standard, that is elaborated on here.

Patient based Consent on Teaching

This MLTRAINING PurposeOfUse could be leveraged in a Patient-specific Consent. This would enable a Patient to indicate that they do not want THEIR data used to teach an AI. This means the Access Control is more fine-grained, in that each datum pulled from the database must be checked to see whether the given subject of the data (the Patient) has authorized, or did not deny authorization for, AI to learn from their data.
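Such an opt-out might be sketched as a FHIR R4 Consent whose provision denies the MLTRAINING purpose. The category coding and patient id are illustrative assumptions; the v3-ActReason system is where PurposeOfUse codes live.

```python
def ml_training_opt_out(patient_id):
    """Sketch of a FHIR R4 Consent: the Patient denies use of THEIR data
    for AI/ML training via the MLTRAINING PurposeOfUse. The category
    coding is a plausible placeholder."""
    return {
        "resourceType": "Consent",
        "status": "active",
        "scope": {"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/consentscope",
            "code": "patient-privacy"}]},
        "category": [{"coding": [{
            "system": "http://loinc.org", "code": "59284-0"}]}],
        "patient": {"reference": f"Patient/{patient_id}"},
        "provision": {
            "type": "deny",
            "purpose": [{
                "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
                "code": "MLTRAINING"}]}}

consent = ml_training_opt_out("example-patient")
```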

Treatment Decisions

There are other PurposeOfUse codes for when the AI is used during treatment (TREATDS) or payment (PMTDS) decisions. These PurposeOfUse are specific to the outcome, and are therefore distinct so that business rules or Patient Consent can allow one but not the other. They would otherwise work rather similarly.

The most likely use-case is one where Patients get to indicate that they do or do-not want AI used in making Clinical Decisions (or Payment Decisions). This is diagrammed below, where each Patient has a Consent with a go or no-go term around the TREATDS PurposeOfUse; that term is used by the AI System authorization to allow the AI to make decisions, and thus look at historic patient data.
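The go/no-go evaluation can be sketched as a simple check of the Consent provision. This is deliberately oversimplified (real consent evaluation involves nested provisions, actors, data classes, and time periods); the fallback-to-permit default is my assumption, standing in for organizational policy.

```python
def ai_use_permitted(consent, purpose_code):
    """Sketch: evaluate a go/no-go consent term for a PurposeOfUse such
    as TREATDS or PMTDS. Greatly simplified for illustration."""
    provision = consent.get("provision", {})
    for purpose in provision.get("purpose", []):
        if purpose.get("code") == purpose_code:
            # A matching 'deny' provision is a no-go; 'permit' is a go
            return provision.get("type") != "deny"
    # No term about this purpose: fall back to org policy (assumed permit here)
    return True

no_go = {"provision": {"type": "deny",
                       "purpose": [{"code": "TREATDS"}]}}
```

Because TREATDS and PMTDS are distinct codes, the same Consent can deny one while staying silent (or permissive) on the other.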

Conclusion

These PurposeOfUse codes are already defined for these purposes. There may be other PurposeOfUse codes that need to be defined; that is a good exercise for discussion. The above scenarios are also not the only ones, and indeed they might not be the most likely or most useful ones. My point in this article is to show that we (Security WG) have done some thinking and developed some standard codes.



Healthcare AI Transparency ballot

Healthcare use of AI needs to be Transparent, clearly labeling and attributing when patient data was created or influenced by AI. This is the goal of a new Implementation Guide going to HL7 Ballot really soon. This Implementation Guide will also be the focus of an HL7 FHIR Connectathon testing track in January.

The guide is designed for health IT developers, clinicians and institutions that use AI (including generative AI or large language models) to generate or process health data. It provides a common format so downstream systems and human users can see what data came from AI — when, how, and by which algorithm. This helps them judge whether AI-derived data are reliable, appropriate, or need further review.

Key features include:
  • Tags or flags on FHIR resources (or individual data elements) to mark AI involvement.
  • Metadata about the AI tool: model name and version, timestamps, confidence or uncertainty scores.
  • Documentation of human oversight (for example, whether a clinician reviewed or modified AI outputs).
  • Traceability: which inputs (e.g., clinical note, image, lab result) were fed to the AI, and how outputs were used to produce or update health data.

For stakeholders — such as patients, clinicians, and health-system administrators — the main benefit is transparency. Users can tell whether data was AI-generated or human-authored, which supports trust, safety, and informed use of AI in care.

And when the AI model or prompt is found to produce unsafe recommendations, these transparency indications can be used to find potential problems that can then be reexamined.

AI will be used, and attribution to that use will help us deal with the data in the future.

Monday, October 20, 2025

Age Verification is much more important than porn

There is much talk nowadays, driven by some regulations around the globe, of a need for internet services to know a user's age. The main use-case that comes to the discussion is protecting children from accidentally seeing porn. This use-case is hiding a much more important problem that must be solved at the same time. The porn problem is rather easy to argue as a universally "good" use-case; not many will be able to argue against it from any perspective. Thus, it is used to hammer a solution into existence. But once that solution exists, it will be used for many use-cases that are not as universally "good", meaning it will be used by some governments against small groups that have much less leverage than the porn industry has.

Parent solution:

Many solutions being proposed today have 'the parent' indicate their children's 'age'. This seems like a good solution for a while, but who proves that that individual is 'a parent', and specifically 'the parent of that child'? These solutions are trying to build sound logic upon ground that is not solid.

What is Age limited

Porn is easy to identify as a problem, and as I have said above it is easy to agree. One might add some topics like online gambling as easy to identify and universally agreed to.

In the physical world we have access to Alcohol, Tobacco, Vaping, and other drugs; along with Driving, Voting, Military Service, Credit Cards, Car Rental, and even solo travel. In the physical world these are controlled at the source, where the item or service is dispensed.

In the mixed physical and virtual world, we have some history (mostly failed) with Movies, Music, and Video Games. It can be argued that these were early efforts, and that with age verification they would have been more effectively controlled. These are all like porn in that they are rather universally agreed to.

Problematic Age Limited

Less clear are other information (internet) topics that "some" people consider should be "age limited". Who are these "some" people, and what criteria are they using to determine what is "age limited"? I am sure many of the topics beyond porn will NOT be universally agreed to. Which means that in one location topic ABC is age limited, and in another area it is not. Some of these topics are deep/heavy topics, like abortion; while others are stigmatizing topics that appear to be simply embarrassing. But all of them can be leveraged to great harm by governments, parents, spouses, peers, and bullies.

- Abortion (information, consulting, or services)
- Sexual Health
- Self-harm
- Addiction
- Trauma
- Telehealth
- Weight advice
- LGBTQ+
- sex education and reproductive health
- domestic violence, sexual assault
- emotional abuse
- child abuse or neglect
- homelessness
- poverty
- ADHD
- chronic pain
- autoimmune disorders
- emancipation or foster care
- etc...

The problem is not that these information topics exist, but rather that anyone seeking this information must provide age verification; and the government must NOT be able to determine who has tried to gain access to this information.

Note that someone might be simply intellectually curious, or doing research for school, or helping out a friend. But because they searched a topic, they will be vulnerable to being discovered as having been interested. Being interested should not be a crime, even in government regions where the act is a crime.

Age Verification Service

There is good discussion going on about the design and standardization of these services. The discussion is mostly about how those that provide an "age limited" service want to use an "age verification" service so that they don't have to do this difficult task themselves. This is a good topic to discuss, as doing this wrong is easy and exposing individual privacy is common.

What is not discussed broadly, but I have confidence that in the standards this is discussed, is how the "age verification" service must also be isolated from knowing WHY the age assertion was requested. This is to say that the "age verification" service can't become the thing that a government can subpoena to turn over records so that the government can know the individuals that have been seeking "abortion" information (for example). 

The governments will want to be able to do this subpoena, so they are not going to be pointing out this privacy problem. Much like they want encryption backdoors, they want backdoors to age verification.

Thus, the solution must be blinded BOTH directions; this is what makes it so much harder.

The Age Verification Service must not have an audit trail. None at all. It is far better for it to have failed "open" (allowing access when it should have been forbidden) than for the whole service to expose the whole population that it serves. Privacy Principles must be prime.

Age Verification Service problem

The App stores, like Apple and Google, are being challenged to provide these Age-Verification services. If they focus on the easy use-cases they will not see the hard problems. I hope that they are not blind. Once we have a solution, however flawed it is, it will be used everywhere.



Monday, October 13, 2025

Modern view on Pseudonymization

For years, the terms 'anonymization' and 'pseudonymization' described distinct technical methods for de-identifying data. But if you're still thinking of them that way, you might be behind the times. Driven by regulations like GDPR and court decisions, the focus has shifted from pseudonymization as a method to 'pseudonymized' as a description of the dataset itself. The key is who possesses the re-identification method. This subtle change has profound implications.

Ten years ago, I worked on the De-Identification Handbook with IHE, and also on the Health Informatics Pseudonymization standard within ISO. At that time the concept of de-identification was broken down into two kinds: there was "anonymization" and there was "pseudonymization".

Anonymization had no way to reverse, while pseudonymization had some mechanism for reversing the pseudonymization. At the time these were seen as methods, not as the resulting dataset. These methods would be used to define how data would be de-identified. The resulting dataset would then be analyzed for its risk of re-identification. That risk would be inclusive of risks relative to the pseudonymization methodology.

Today IHE is working on updating the De-Identification Handbook. I'm no longer working on that project due to my employment situation. But while I was working on it, the other subject matter experts were insisting on a very different meaning behind the words "pseudonymization" and "anonymization".

The following podcast by Ulrich Baumgartner really opened my eyes to how these words got a different meaning. They got a different meaning because they are used in a different contextual way. Whereas before the words were used purely as descriptions of methodologies, they are today more dominantly used to describe a dataset that has either been pseudonymized or fully anonymized.

[The Privacy Advisor Podcast] Personal data defined? Ulrich Baumgartner on the implications of the CJEU's SRB ruling #thePrivacyAdvisorPodcast https://podcastaddict.com/the-privacy-advisor-podcast/episode/208363881




Today, because of GDPR, there is a bigger focus on the dataset than the methodology. GDPR sees "pseudonymization" as a word describing a dataset that has only been pseudonymized but is still in the hands of the organization that possesses the methodology to re-identify. This is contextual: the dataset is in the hands of an organization that has the ability to undo the pseudonymization, and therefore the data are NOT de-identified. The data become de-identified when the pseudonymization re-identification mechanism is broken, that is to say, when the dataset is passed to another party while the re-identification mechanism is NOT passed to that party.

This is the key point that is adding clarity to me. To me, the organization that is using pseudonymization is preparing a dataset to give to someone else; the first party organization already has the fully identified data, thus the pseudonymized data is not something they intend to operate on. It is the NEXT party, the data processor, that gets the dataset and does NOT get the re-identification mechanism. It is this NEXT party that now has de-identified data. 

I now do understand the new diagram, as there was a diagram that was drawing distinction between Identified data, and Anonymized data; with the transition of data from Fully-Identified->Pseudonymized->Anonymized. I saw this diagram, and it did not align with the original methodology perspective, but it does follow with this contextual/relative perspective.

Overall, this understanding is consistent with the original "methodology" meaning of the words, but for some reason the GDPR courts needed to say out loud that the FIRST organization doesn't get the benefit of de-identification until they pass the data to the NEXT organization.

There are some arguments within the GDPR community as to whether it is ever possible to make anonymous data out of pseudonymous data, because there is SOME organization that does have access to the re-identification mechanism. As long as someone has that ability, some courts see the data as potentially re-identifiable. That conclusion is not wrong on the blunt fact, but it does not recognize the controls in place to prevent inappropriate use of the re-identification mechanism. The current courts do see that there is a perception of a pathway from pseudonymization to anonymization.

Pseudonymization is more like Encryption than Anonymization

The interesting emphasis at this point is that within Europe under GDPR pseudonymization of a data-set is much like an encryption of a data-set. Both encryption and pseudonymization are seen as purely methodologies of protecting data, neither are a clear methodology to gain anonymization.

Conclusion

GDPR has placed a different emphasis on pseudonymization, with the default meaning being the state where the data holder has used pseudonymization methods but still holds the re-identification key. This state of the data was never emphasized in the past, as ultimately the goal of pseudonymization is to produce a dataset that can be passed to another organization who does NOT get the re-identification keys. Whereas in the past we would have said that the other organization got a pseudonymized dataset without the ability to re-identify, GDPR would now say that the other organization got an anonymized dataset.

Friday, October 10, 2025

How are complex trust networks handled in http/REST/OAuth.

 > How are http/REST authorized in complex trust networks handled? 

I don't have all the answers. This has not been worked out. I am not holding back "the" answer just waiting for someone to ask.

Whereas in XCA today we use a network of trust (SAML signing certificate authorities, and TLS certificate authorities), and the network communication also goes through "trusted intermediaries".

In OAuth there are no "trusted intermediaries". The search parameters and responses are always point-to-point between the one requesting and the one responding. The OAuth token used in that point-to-point request/response has been the hard thing to create. OAuth has a mechanism to "discover" who the responding service trusts; this is advertised as well-known metadata at that responding service endpoint. So the Requester queries that well-known metadata, and from that data it then needs to figure out a trust arrangement between the requesting OAuth authorities and the responder's trusted OAuth issuers.

A. Where no trusted third party is needed

The majority case, used very often today, is where the well-known OAuth metadata can be directly used by the client. The client asks that OAuth authority to create a new token, given the requester's token, for authorization to access the responder system.

THIS is what everyone is doing today with client/server FHIR RESTful. This is how everyone looks to get their system working with OAuth.

The token has some lifetime and scope, and is used for multiple requests/responses. Again, this is normal for all uses of OAuth.

B. Where a trusted third party is needed

The case where the requester does not have a trust relationship with the responder-defined OAuth authority is where the hard work comes in, as in our use-cases where the requester and responder are in different communities. Like with XCA, some trust authority is needed. And like with XCA, discovering who that trust authority is, is the job of directory services.


Ultimately the requesting system finds a trusted OAuth issuer, and asks for a new token, given the requesting system's token, to be generated targeting the responding system. Once this token is issued, the requester can do http/REST/FHIR direct to the responding service endpoint using the internet for routing, with that last OAuth token. The responding system can test that the OAuth token is valid.

In the healthcare scenario we might want to force an unusual nesting of prior tokens. In this way the responding service can record who, why, and from where the request came. This nesting is not typical and is considered complex to implement and parse.
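The token trade-in described here is what OAuth 2.0 Token Exchange standardizes: a form-encoded request to the token endpoint presenting the token you have (subject_token) plus the target (audience), optionally with an actor_token carrying the intermediary's identity. The sketch below just builds that request body; the token values and audience are placeholders.

```python
from urllib.parse import urlencode

def token_exchange_body(subject_token, audience, actor_token=None):
    """Sketch of an RFC 8693 Token Exchange request body: trade the token
    you have for one targeting the responding community. The actor_token
    variant is the 'unusual nesting' case discussed above."""
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience}
    if actor_token:
        # carry the requesting party's identity so the responder can record it
        params["actor_token"] = actor_token
        params["actor_token_type"] = "urn:ietf:params:oauth:token-type:access_token"
    return urlencode(params)

body = token_exchange_body("eyJ-requester-token", "https://responder.example.org")
```

This body would be POSTed to the trusted issuer's token endpoint; the response carries the new access token used for the point-to-point FHIR requests.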

see: OAuth 2.0 Token Exchange (RFC 8693)

C. Where multiple trusted third parties are needed

I think that the (B) solution can be iterated or recursed on infinitely. 

SO:

The main point of OAuth is that you get a new OAuth token issued for a given target/scope based on the OAuth token that you have. EACH OAuth authority makes a permit or deny decision; hence why an issued OAuth token is always a statement of authorization. If you were not authorized, you would not be issued a token.

In this way the authorization is established up-front; and the data transactions reuse that token until it expires. Thus, the up-front authorization may be expensive, but that token is reused 1000 times in the 60 seconds it is good for (simplified for illustration sake)

Caveat Emptor

I have no idea if the above is right. I think it is close, but I don't know.

I welcome commentors to correct me, especially if they can point at standards profiles that have been established. Especially if these standards profiles are established in general IT, not specific to healthcare. I am suspicious of healthcare experts who invent healthcare specific standards profiles.

Monday, September 29, 2025

FHIR RLS - Record Location Service

I was asked

> Does an IG for such a thing exist (FHIR RLS)? I was wondering if IHE did this? Part of MHD?

 
Not fully. IHE has PDQm, which has most of what is needed, but no one has brought federation to IHE to solve. PDQm supports a FHIR way to do Patient Identity resolution. It supports a few models:

  • Demographics to identity
  • Identifier to identity 
  • Fuzzy match to identity 
  • Search to identity 
The result is one or more Patient Identities. Some of them might be already correlated to the same individual, some may be alternatives. This is common support for an RLS.
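The lookup models above map onto ordinary FHIR Patient searches, roughly as sketched below. The base URL, parameter values, and the use of $match for fuzzy matching are illustrative assumptions, not quotes from the PDQm profile.

```python
def pdqm_queries(base):
    """Sketch of the PDQm lookup models as FHIR Patient searches.
    All values are hypothetical examples."""
    return {
        # demographics to identity
        "demographics": f"{base}/Patient?family=Smith&given=Alice&birthdate=1980-03-02",
        # identifier to identity (assigning authority | value)
        "identifier": f"{base}/Patient?identifier=urn:oid:1.2.3.4|MRN-12345",
        # fuzzy match is commonly done with the $match operation (POST body omitted)
        "fuzzy": f"{base}/Patient/$match",
        # general search to identity
        "search": f"{base}/Patient?name=smi"}

q = pdqm_queries("https://ehr.example.org/fhir")
```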

What is missing is an indication of the community that the given identity exists within. When using MHD the assumption is that your MHD Document Responder can figure this out on the backend, thus the PDQm + MHD client doesn't need to know. This gap is being discussed now.

The second thing that is missing is some mechanism for the PDQm server to seek out partners that might have identity matches. This mechanism is not defined today for IHE XCPD, so it might not need to be defined for FHIR. I expect some may want that.

The third thing that is needed is a way to translate a community identifier to a network communication mechanism. This is available in mCSD. This mechanism can work like it would for XCA, listing XCA gateways; or could be more Internet-based, simply listing FHIR endpoints.

There is a very good white paper from Grahame in HL7 on Intermediaries. This multiple-levels-of-services vision is like what IHE has with XCPD+XCA, but for full access to FHIR services. There are some solutions proposed, but no further solution defined. HL7 didn't want to work on it as it is not core, so the plan was to have IHE work on it with backing from HL7. The problem is that although the problem was presented to IHE IT-Infrastructure, not enough interest in working on it came forward. Thus, gridlock.

Given these struggles: there is XCPD, which is not FHIR, but it would work to find the identity at a community; then a lookup in mCSD would find the FHIR servers.

 


Monday, September 22, 2025

The fall of the Profile

Firely has looked at #FHIR use, and came to the conclusion "Too many profiles, not enough reuse...". I agree and find this trend very troubling.

IHE started the concept of Profiling 25 years ago: a joint effort of Vendors and Users. The Users would collaborate on use-case based needs, fully focused on outcomes and overcoming problems. The Users tempted the Vendors with a promise to "buy" if the Vendors agreed to ONE solution. Economics drove this to succeed.


Lately neither of these parties is leading; rather it is Governments and Consultants (yes, I am now a consultant). This not only lacks the right Market forces, but is also not done globally. With no global focus, the solutions are regional... all different.



Thursday, September 18, 2025

AI use Transparency in Healthcare: Building Trust Through Provenance

I want to bring some additional visibility to a project I am involved in regarding AI transparency in Healthcare. The goal of Transparency is to be able to indicate when data in the Medical Record has been influenced by AI; this is an important goal for providing Integrity in the use of AI.

The Challenge: A Spectrum of AI Influence

The goal of our project is to indicate the level of AI influence on medical data. This isn't a simple "yes or no" question, but a spectrum that includes:

  • AI-authored data: The data was created entirely by an AI.

  • AI-recommended data: An AI suggested the data, and a human approved it.

  • AI-assisted data: An AI helped a human in some way, but the human was the primary author.

To address this, we're using two key approaches: data tagging and provenance.

Data Tagging

With data tagging, this is simply a tag of the kind of interaction that the AI had with a data object. It is not useful for explaining the details of the interaction beyond a generalizable kind of interaction. This tag is, however, helpful as a flag for those who want to know when data was influenced.

One use of a simple tag is to recognize that the object may not be original thinking. There might be recognition that data that has been influenced by AI might not be as useful to train future models. The tag might also be used simply to know that there are more details in a Provenance.
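As a sketch of that training-data use (my own illustration, not from the IG): a pipeline could exclude resources carrying an AI-contribution tag in meta.security. The AIAST code is an assumption here, standing in for the artificial-intelligence provenance code discussed later in this article.

```python
# Sketch: exclude AI-influenced resources from a model-training dataset
# by checking meta.security. AIAST ("artificial intelligence asserted")
# is assumed as the relevant code from v3-ObservationValue.
V3_OBS = "http://terminology.hl7.org/CodeSystem/v3-ObservationValue"

def is_ai_influenced(resource: dict) -> bool:
    """True when any meta.security tag marks the resource as AI-contributed."""
    for coding in resource.get("meta", {}).get("security", []):
        if coding.get("system") == V3_OBS and coding.get("code") == "AIAST":
            return True
    return False

resources = [
    {"resourceType": "Observation", "id": "obs1"},
    {"resourceType": "Observation", "id": "obs2",
     "meta": {"security": [{"system": V3_OBS, "code": "AIAST"}]}},
]
training_set = [r for r in resources if not is_ai_influenced(r)]
print([r["id"] for r in training_set])  # → ['obs1']
```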

Provenance

With Provenance we can carry details about what AI, what version, what model, what prompt, what card, etc. The FHIR Provenance is a derivative of W3C PROV, reformed into the RESTful Resource encoding that HL7 FHIR defines.

We are trying to reuse more general AI standards such as model cards, but find that there is a lack of consensus. I am confident that the HL7 group will use external standards as appropriate.

One might need to know this level of detail to understand the usefulness of the output. One might also use this Provenance to track down AI influence that may have been determined to be suspect or incorrect. This might find decisions that need to be reevaluated.

Element level, not just Resource level

Both data tagging and Provenance have methods to focus on the element level rather than the whole Resource. For some resources, tagging or referencing the whole resource is all that is needed; but for some more workflow-specific Resources like CarePlan, some data within might be influenced while the whole is not. So, this element level is supported by both the Data Tagging and Provenance solutions.

Concerns with Provenance model

A concern I heard voiced at the connectathon this weekend is that Provenance is hard to work with. I think this is just an educational issue. Provenance is different in that Provenance.target points at the resources whose provenance it describes; thus the targeted resource does not itself contain evidence of the Provenance. There are a few solutions to this:

  1. Use the Data Tag to indicate that the data was influenced by AI, and this gives evidence that searching for Provenance might be useful. When the AI tag is found, one just searches for Provenance with a target equal to the resource you have.
  2. Put the Provenance inside the Resource. FHIR supports a concept of a Resource "containing" another resource. This is used when the contained resource can't stand alone, but can also be used where the outer Resource really wants to carry the inner Resource.
  3. Searching for resources, one can use the "_revinclude" parameter to also include any Provenance. Indeed, _revinclude is defined for anything, but the example given is Provenance.
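Options 1 and 3 above amount to constructing standard FHIR search requests. A minimal sketch (the base URL is hypothetical; the search parameter names are standard FHIR):

```python
# Sketch of options 1 and 3: finding Provenance for a resource you hold.
# The server base URL is a made-up example.
BASE = "https://fhir.example.org"

def provenance_search_url(resource_type: str, resource_id: str) -> str:
    """Option 1: after seeing an AI tag, search Provenance by target."""
    return f"{BASE}/Provenance?target={resource_type}/{resource_id}"

def revinclude_search_url(resource_type: str, resource_id: str) -> str:
    """Option 3: fetch the resource and any Provenance pointing at it."""
    return f"{BASE}/{resource_type}?_id={resource_id}&_revinclude=Provenance:target"

print(provenance_search_url("Observation", "obs2"))
# → https://fhir.example.org/Provenance?target=Observation/obs2
print(revinclude_search_url("Observation", "obs2"))
# → https://fhir.example.org/Observation?_id=obs2&_revinclude=Provenance:target
```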

Developing Implementation Guide

The HL7 implementation guide is in development so I don't, yet, have a formal publication to point at. The CI build is -- https://build.fhir.org/ig/HL7/aitransparency-ig/branches/main/index.html

All of the above discussion is already included in this Implementation Guide.

I have other blog articles on AI controls 

Learning Dataset Provenance

Wearing a different hat, I was contracted as a standards expert with the Data and Trust Alliance to help them define a Provenance standard for the datasets that are offered to be used as source-learning material. https://dataandtrustalliance.org/work/data-provenance-standards

Conclusion

These are developing, so please get involved to help us address your use-case and learn from your experience. 

Monday, September 8, 2025

Approach to Product use of Standards

I have described a role for myself as a standards expert who participates with product development to assure good implementation. This would focus on quality implementation that is robust and can then stand the test of time. However, I really don't think that this is a standalone role, but rather a role that someone on the product team plays. Likely a systems architect, maybe the db architect.

Now that I have started my consulting organization, Moehrke Research LLC, I have been approached by people trying to get me to take on this kind of full-time role. The role is rather consistently defined, and defined in a very standalone way, with what I think are unreasonable expectations. The job description includes many years of standards work, many years of product development, many years of healthcare market knowledge, etc. Job titles like:

  • FHIR (Fast Healthcare Interoperability Resources) Architect
  • Lead Data Modeler (FHIR)
  • FHIR Interoperability Specialist
  • Senior IT Solutions Architect
  • Healthcare Solution Architect

I fit these expectations, but I really don't think that what you need is a full-time position. I think it is better suited to a medium-sized engagement with me.

I recommend Build from within

Where someone (or two) from the product team get elevated. Yes, it is an additional role and thus a change in their role. I assure you paying them a bit more to take on this role will be worth it. You need to include a test engineer as well. I work with them 2-3 days a week for a few months, then a few days a month for a few more months, and then a few hours per month for a few more months. Overall, this likely takes 6-9 months. I teach them how to:

  • discover appropriate standards, 
  • approaches to reading standards, 
  • extracting the requirements and alternatives,
  • where to find help, 
  • where to find open-source,
  • where to find test tools and procedures,
  • how to leverage Postel's Law, 
  • how to engage in improving the standard,
  • how to dispute interpretations of the standard,
  • where to get creative and 
  • where to be strict. 

With an engagement like I am proposing, I am providing this guidance over 600-1000 hours, and you walk away with the skills on the team. This is a bargain relative to a full-time position for 6 months. We also build a personal relationship that can handle occasional contact or lead to future contract engagements.

More sustainable

The roles that are posted cannot be filled except by a few dozen people globally, given the expected number of years of experience, depth of knowledge, and unusual education. There are simply not that many people doing what I have done over the past 25 years. It is a very small group (I would like to see it expand).

Interoperability is not something to build a product around; it is something to build a product on top of. Meaning it is not the inspiration for something that doesn't exist. The standard was written because many have needed something like what the standard has defined.

There are HL7 training certifications that can help, but I see these also as something that someone already on your team adds to their job roles.

Conclusion

Similar is true of the other topic areas I have skills in: Privacy and Security... these are a role, but not necessarily a full-time position. These are all more a culture thing, with a role to watch that the culture is followed.

In very large organizations like Oracle Health, Epic, GE Healthcare, etc... these can be full-time roles; but even there are constant struggles with justifying standalone positions. Even in these large organizations the sustainable position is a role that team members take on.

Build your team from within. I provide subject matter expertise, but your team is key. We all walk away happy and with better Interoperability.


Tuesday, September 2, 2025

Product use of Standards

The third kind of contract I’m well-suited for involves working directly with product developers—whether client or server-side—to ensure their solutions optimally leverage existing standards. This role is often overlooked in traditional standards development but is critical for real-world adoption. While it may seem like a large engagement, it often resembles a small, focused contract. I’ll explore that nuance more in the next blog post.

Government Mandated Standards

A product can be compelled to be compliant with a standard or Implementation Guide (IG). This is common nowadays around the globe with regulation requiring that products and the organizations that use them be compliant with a given standard or IGs. These government efforts are trying to move their realm beyond some point, with the goal of having a better outcome after the standards are deployed.

A good example of how a government-required standard can dramatically improve the realm that government controls is the electric socket, the light socket, or lately USB-C. In these cases, without standardization there were many alternatives that burdened the consumer. By mandating a standard, the products all align on that standard, and the consumers don't need to think about it anymore.

Purchase Power

A product may choose to implement a standard because market pressures (customers) demand it. In this case it is the power of the purchase ($$$) that forces the use of a standard. An important perspective here is where the first vendor works with the purchaser to define that which all later vendors must implement. In the case of early Health Information Exchanges and Radiology Exchanges, this was the dominant method for standards to become required. That is to say, those purchasing products demanded that a given IHE Profile must be used, and that drove mandates. This was the success story for IHE: a collaboration between those with purchase power in the radiology departments, who wanted ONE standard to mandate, and the vendors, who knew that one standard would be less overall work. Unfortunately, this story has been lost to time.

Implement Once, Innovate Beyond

The overall benefit of using standards is that those developing products that need a given standard can be assured that their effort to implement it will be reusable over and over; thus, the product development group can focus more on the features and value of the product. A good standard is one that can be implemented once and not require more time spent on it (realistically it takes some maturing to get here). The point is that by using a standard one does not need to constantly adjust how one communicates with peers.

This use of standards overall benefits everyone. 

What help do you need?

The fact that a regulation picks a standard or IG does not mean that developing a product to that standard is easy. The standard might not be all that easy to read; most standards are hard to read. The needs that the product has might not be expressed in the standard, so some interpolation needs to be done. There are often things that need to be implemented that the standard doesn't mention; most of the time this is an area where the standard wants to be lenient, so one must understand a range of possibilities.

Postel's Law

When I work with your team, I will stress multiple times a day a principle that is credited for the success of the TCP/IP internet, often called Postel's Law. It has two very different things to say. To those about to send some content to another: be as compliant as you possibly can be. To those receiving some content from another: be very robust and lenient in how you interpret that content. Many people have a problem with this second part, as they feel that receivers should be strict, rejecting anything that is not compliant. The problem with that is that it is very fragile and doesn't recognize reality, a reality that often comes along with revisions of the standard over time.
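As an illustrative sketch of my own (not from any standard): a lenient receiver normalizes minor variations that a strict receiver would reject, while the sender emits only the canonical form.

```python
# Sketch of Postel's Law applied to a FHIR-ish JSON name element:
# the sender emits strictly canonical content; the receiver tolerates
# common variations (a bare value where a list is expected, unknown fields
# that a later revision of the standard may have introduced).
def send_name(given: list) -> dict:
    # Sender: be as compliant as possible -- always a list, only known fields.
    return {"given": list(given)}

def receive_name(element: dict) -> list:
    # Receiver: be robust -- accept a bare string where a list was expected,
    # and silently ignore fields we do not recognize.
    given = element.get("given", [])
    if isinstance(given, str):
        given = [given]
    return given

assert receive_name(send_name(["John"])) == ["John"]                  # canonical round trip
assert receive_name({"given": "John", "futureField": 1}) == ["John"]  # lenient receive
```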

Conclusion

Let me be your thoughtful, experienced guide in the often murky world of standards implementation. 

Thursday, August 21, 2025

Standards Development Contracts

The medium-sized contracts that I envision would be where I help an organization develop a standard or defend their position within a standards development project. Here "standards development" is not limited to core standards like FHIR, CDA, or HL7 v2, but includes international Implementation Guides (what IHE calls a Profile, or HL7 calls accelerators) and regional Implementation Guides.

I have even used the Implementation Guide tooling to produce a private publication (for the VA - MyHealtheVet) that defines how existing data would map to FHIR Resources and be aligned with us-core. I use this tooling in my own experiments as it is a quick way to get a publication that is easy to author and edit over time. 


Leading a standards project takes a good bit of negotiation and consensus building. These are skills that I have been honing over the past 25 years (actually more, as I also did this in the internet standards world in the 80s and 90s with TCP/IP, NFS, Telnet, FTP, and a few others that many people today don't remember are foundations of the internet).

Defending an organization's position is similar, but very different. It involves discovering the potential problems and crafting a solution that the author and contributors find understandable and worthy of addressing. Sometimes this effort is simply helping by providing examples of good and bad outcomes, such as working examples.

Along with this is providing tooling to support internal testing, simulation, and demonstration.

Developing Standards is the best way to develop a market for your product to further enhance. Standards are not a threat to a product unless that product is not truly adding value. By defining standards, one moves the opportunity for improvement up into the application layer.

Organizations, which might be provider or payer organizations, or might be regional organizations, often need to refine a standard to make it more clear for their region, and thus make testing and dispute resolution more effective.

I am well-seasoned to be able to help you with this effort. These projects might be medium, but they might also be small or large. The size is more defined by the outcome needed. Contact me at Moehrke Research.

Tuesday, August 19, 2025

Small contracts

Over the past few years, I have taken on small contracts. These would be a few hours and be focused on delivering a training session or two. These were never big enough projects for my employer at the time, so they allowed me to take them on the side. I would tend to work these in the evenings and weekends so as to not interfere with my day job.

Now that I am looking for contracts, these small contracts are something I am looking forward to. There is not much fuss in getting them going, and they are a great way for me to interact with groups of people just getting going in Interoperability or Healthcare Informatics. 

Training in Healthcare Privacy and Security


The subject matter that I am known for is teaching FHIR Privacy and Security topics. I have presented an HL7 tutorial on "FHIR Privacy and Security" many times, and I am not limited to giving this tutorial at HL7. HL7 has a recording from a few years ago that is freely available, sponsored by ONC (now ASTP). If you just want to listen, then the recorded tutorial is good enough. But if you have a specific use-case that you want me to focus on, with discussion, design, and policy writing, then this might be a good small contract to start with me.

I can also go deeper on each of the topics within the tutorial. I have had to make the 3-hour tutorial very high level, which is a good level for many people but does not satisfy someone who is focusing on a given topic:

  • Access Control - considering Privacy Consent
  • Access Control - considering Break-Glass
  • Audit Logging - to detect intrusion and investigate
  • Audit Logging - to inform an Accounting of Disclosures or Access Log to a Patient
  • Digital Signatures
  • Document Encryption
  • Consent encoding in FHIR and management over time
  • Data Sensitivity Tagging methodologies and architectures
  • De-Identification / Pseudonymization / Anonymization
  • Provenance

Training in Healthcare Infrastructure -- Implementation Guides

  • IHE IT Infrastructure Profiles
    • XDS / XCA / XCPD -- Document Sharing
    • MHD / MHDS / PDQm / PMIR
    • mXDE -- decomposing Documents into FHIR Resources with Provenance
    • Basic Audit Log Patterns (BALP)
    • Privacy Consent on FHIR (PCF)
    • Digital Signatures (DSG)
  • HL7
    • FHIR International Patient Summary (IPS)
    • FHIR International Patient Access (IPA)
    • FHIR Data Segmentation for Privacy (DS4P)
    • FHIR Consent
    • FHIR AuditEvent
    • FHIR Provenance
    • FHIR Signature
Contact me at Moehrke Research.

Tuesday, July 22, 2025

What's next for me?

It has been a relaxing week, but I am still interested in opportunities. I have had a handful of phone calls. I hear that my name is mentioned positively in many conversations that I am not involved in. LinkedIn tells me that my announcement has been seen 11,000 times, my blog article only 104. So, you can understand that I am getting dramatically mixed messages.

What I'm looking for

I have put together a Resume, and doing that did solidify my interests in 

  1. Standards development (FHIR)
  2. Profile development (Implementation Guides)
  3. Use of Profiles and Standards (Apps and Infrastructure that use standards)
I don't like doing the administrative things that a consultant needs to do, like finding new work, billing, and following up on billing. I did this a few times over the last few years, and it is outright drudgery. I also don't want to move into a corporate position, like a director of blah, or senior so-and-so. Learning a new corporate process is not inviting; I did that three times already.

I know that I am close to retirement, I can feel the beach sand beneath my feet.  Thus, I understand that whatever I do needs to make this transparently clear. And I recognize that might limit my opportunities. I am okay with that.

Ongoing Contributions

I will continue my work in HL7 and SHIFT:
  • SHIFT on their work to make Consent more implemented. I have been providing subject matter expertise in FHIR Consent and the IHE Privacy Consent on FHIR (PCF) implementation guide. Right now, the team is working on implementing, so I don't have much to contribute. I would like to be involved in code reviews. I am also providing expertise in the discussion with various stakeholders and implementers. 
  • HL7 on their work with FAST Consent, which is taking an administrative step beyond IHE-PCF to define policies and management steps for instances of Consent. This work can only be done in a regional context where regional policies can limit the variability. So, context is critical here. Having reviewed many regional policies and applied them to the development of FHIR Consent, I have a pragmatic and realistic perspective to provide.
  • HL7 on AI Transparency IG, which is using features we built into FHIR for tagging data that was contributed by AI and providing details of that AI actions in Provenance.  I have applied these concepts to IHE profiles and Data Trust Alliance, and other side projects. The power of Provenance is best shown with use-case analysis and examples. 
I will continue with these, even if only Pro Bono. I certainly hope that I have other opportunities that I could contribute to. I think I still have plenty of energy and expertise to be applied to Healthcare Interoperability Standards development and promulgation.



Monday, July 14, 2025

Monday Morning, nowhere to report

This Monday morning started differently—I’m awake, ready to work, but with nowhere to report. 

After nearly nine years at ByLight, my journey there has abruptly ended. ByLight had taken me on as a standards representative back in November of 2016. I have worked on multiple CDA, XDS, and FHIR-based projects ever since. Most recently, I was helping modernize MyHealtheVet—the VA’s patient portal where Veterans securely access their medical records, message their care teams, and manage prescriptions. Our team was about halfway through a FHIR transition and updating the web interface, with Oracle Health (Cerner) integration just beginning to support the VA’s evolving EHR ecosystem.

But the contract was unexpectedly not renewed. We were all let go, and I imagine many of my colleagues are now, like me, seeking what comes next. ByLight fought hard to continue the work, but I don’t know what led to the decision or what the future holds for the portal itself.

What now?

I had expected to retire somewhere in the next 2–5 years, with time to prepare and transition. That plan changed overnight.

Today, I was supposed to co-chair the IHE IT Infrastructure Technical Committee’s face-to-face meeting. Instead, I had to inform my co-chairs and peers that I’m no longer employed and no longer have standing within IHE. Others had scheduling challenges too, so we opted to postpone and shift to our regular t-con development calls instead. IHE will also need to redistribute the roles I held—GitHub administration, IG publishing, and more.

I’ve also informed HL7, and they’ve revoked my authority. I recall from before that HL7 has a method to extend membership continuity, but I haven’t heard whether that will apply to my case.

What's next?

While I had begun thinking about retirement, this came too soon. I’m now exploring consulting work—perhaps independently, perhaps through a contracting organization. I’m not interested in stepping into a dramatically different role or climbing further up the leadership ladder; when I look at what’s done “up there,” I don’t find much that sparks inspiration. It doesn’t feel like the right way to wind down. More likely that I use my FHIR Implementation Guide experience and skills to help projects and regions on their profiling.

I’ve seen other standards geeks continue consulting into their later years, which I’ve always viewed with mixed feelings. The world needs space for fresh leadership, and that’s hard to foster when the same people continue to occupy those positions.

So, here I am again—like I was in 2016—dusting off my resume and pondering the next chapter. I do have a few camping vacations scheduled from before all this, and I plan to take them. Maybe the timing will turn out to be fortuitous.

Monday, June 2, 2025

How to record that the Patient authored FHIR Resources or elements

Lately there have been more groups thinking about how Patient contributions to the medical record might be distinguished from clinician-authored data, and how AI contributions could be recognized as distinct. This article will cover a couple of methods that exist in FHIR core, but also exist in CDA and HL7 v2; I will only speak about FHIR.

General need

The general need is to express the provenance of a Resource or an element within a Resource. For this we have two different solutions that are related, but distinct. Two solutions exist because sometimes one needs a lightweight solution, and sometimes one needs fully powerful Provenance.

  • Security Tags
  • Provenance
As stated above, sometimes you want to indicate that the whole resource was authored by the Patient, and sometimes you just want to indicate that one or more elements within the resource were authored by the Patient.

E.g. 
  • Patient record of body weights taken at home
  • Patient's partner indicated the Patient's nickname
  • AI produced a CarePlan based on current labs and observations; relative to clinical care guidelines, and care plan definitions.
  • AI produced an Observation interpretation code value

Using Tags

All FHIR Resources have a meta.security coding element with a valueSet binding that includes a set of provenance codes for this use-case. The fact that these are indicated as .security tags does not mean they are exclusively to be used for security purposes; note that security is the domain of managing risks to Confidentiality, Availability, and Integrity. Provenance comes under Integrity and a bit of Availability.

Some will see .meta.tag, and the temptation to use it is strong (especially with those in the AI space); but this is not the right element. It is not wrong to use this element, but putting your code here will mean that those looking in meta.security will not find what they are looking for. So we should agree to use meta.security and the given standardized codes (when they apply).

The ValueSet available for meta.security covers all of the security space including Availability and Integrity. Most important here is the Provenance sub-valueSet, but you should also note that the Integrity valueSet has some very useful codes (highly reliable, reliable, uncertain, unreliable).

Within the Provenance sub valueSet are codes for the distinction of data being reported by or asserted by:
  • clinician
  • device
  • healthcare professional
  • patient acquaintance
  • patient
  • payer
  • professional
  • substitute decision maker
  • artificial intelligence
  • dictation (software)
I am not all that clear on what the distinction between reporting vs asserting is; nor do I understand the distinction between a clinician and a healthcare professional. I think these distinctions exist in the core codeSystem so that they can be further profiled and made distinct.

So, use these codes at Resource.meta.security to indicate that the whole resource was contributed by one of those kinds of actors. Here is an example of how a whole Observation would be indicated as contributed by AI.
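A minimal sketch of such an Observation, shown as Python building the FHIR JSON. The AIAST code is an assumption here, by analogy with the PACQAST ("patient acquaintance asserted") code used in the next example; verify the exact artificial-intelligence code against the v3-ObservationValue codeSystem.

```python
import json

# Sketch: an Observation tagged, at the whole-resource level, as asserted
# by an AI. AIAST is assumed as the artificial-intelligence provenance
# code from v3-ObservationValue.
observation = {
    "resourceType": "Observation",
    "id": "obs2",
    "meta": {
        "security": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
            "code": "AIAST",
            "display": "artificial intelligence asserted"
        }]
    },
    "status": "final",
    "code": {"text": "Example lab result"}
}
print(json.dumps(observation, indent=2))
```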



Using Tags at the element level

Where the FHIR Resource .meta.security with the code of patient would be understood as indicating that the whole of the resource was asserted by the patient, this is often too blunt a tag. Sometimes one just wants to indicate that an element was contributed differently, like the example above where the patient acquaintance indicated the nickname. Thus one wants to tag that only the nickname was contributed by the patient acquaintance. For this we use the extension that is defined in Data Segmentation for Privacy (DS4P), again pointing out that this IG is broader than just data segmentation and/or privacy.

So, here we show an example of a Patient resource where the nickname was contributed by the patient acquaintance. In this case we need a code at the Resource.meta.security level indicating that inline codes are used in this Resource; since inline codes can appear on any element, it is expensive to look for them without that flag.

    {
      "use": "nickname",
      "given": [
        "Jimmy"
      ],
      "extension": [
        {
          "url": "http://hl7.org/fhir/uv/security-label-ds4p/StructureDefinition/extension-inline-sec-label",
          "valueCoding": {
            "code": "PACQAST",
            "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
            "display": "patient acquaintance asserted"
          }
        }
      ]
    }

Provenance Solution

The tagging solution, even with the element-level capability, often can't convey enough information: who is this patient acquaintance, when was that element added, why was it added, who was involved in agreeing to add it to the record, where did that data come from, how was the original data used, etc. All those things that are part of Provenance (Who, What, Where, When, and Why). Now, recording all of this will make the database rather full of Provenance data, whereas the tag mechanism is very focused and carried fully by the data. But sometimes one does need to know more provenance detail.

In the Provenance, these same codes from above can be used for the various Agent(s), but there is more nuance available in the participation type codeSystem. So a Provenance that indicates that the whole Observation was contributed by the Patient would look like this:

{
  "resourceType" : "Provenance",
  "id" : "example1",
  "target" : [{
    "reference" : "Observation/obs2/_history/1"
  }],
  "recorded" : "2021-12-07T12:23:45+11:00",
  "agent" : [{
    "type" : {
      "coding" : [{
        "system" : "http://terminology.hl7.org/CodeSystem/v3-ParticipationType",
        "code" : "INF"
      }]
    },
    "who" : {
      "reference" : "Patient/pat3"
    }
  }]
}

Provenance at the element level


Whereas the Patient nickname example would look like this (note the use of the `targetElement` extension). There is also a `targetPath` extension where a path can be used.

{
  "resourceType" : "Provenance",
  "id" : "example2",
  "target" : [{
    "extension" : [{
      "url" : "http://hl7.org/fhir/StructureDefinition/targetElement",
      "valueUri" : "n2"
    }],
    "reference" : "Patient/pat3/_history/1"
  }],
  "recorded" : "2021-12-08T16:54:24+11:00",
  "agent" : [{
    "type" : {
      "coding" : [{
        "system" : "http://terminology.hl7.org/CodeSystem/v3-ParticipationType",
        "code" : "INF"
      }]
    },
    "who" : {
      "reference" : "RelatedPerson/f001"
    }
  }]
}

You can, in Provenance.target, use the extension targetElement or targetPath to indicate that just some of the data within a Resource was patient-contributed. See examples 1, 2, and 3 in the Provenance examples: https://build.fhir.org/provenance-examples.html
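To act on such a Provenance, a consumer has to resolve the targetElement extension back to the element carrying that id inside the referenced resource. A minimal sketch of my own, using the example2 Provenance above and a Patient whose nickname element carries id "n2":

```python
# Sketch: resolve Provenance.target targetElement extensions to the
# elements (by their "id") inside the referenced resource.
TARGET_ELEMENT = "http://hl7.org/fhir/StructureDefinition/targetElement"

def targeted_element_ids(provenance: dict) -> list:
    """Collect the element ids named by targetElement extensions."""
    ids = []
    for target in provenance.get("target", []):
        for ext in target.get("extension", []):
            if ext.get("url") == TARGET_ELEMENT:
                ids.append(ext["valueUri"])
    return ids

provenance = {
    "resourceType": "Provenance",
    "target": [{
        "extension": [{"url": TARGET_ELEMENT, "valueUri": "n2"}],
        "reference": "Patient/pat3/_history/1"
    }]
}
patient = {
    "resourceType": "Patient",
    "name": [
        {"id": "n1", "use": "official", "family": "Chalmers"},
        {"id": "n2", "use": "nickname", "given": ["Jimmy"]},
    ]
}
wanted = targeted_element_ids(provenance)                   # → ['n2']
nick = [n for n in patient["name"] if n.get("id") in wanted]
print(nick[0]["given"])  # → ['Jimmy']
```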

Conclusion

The .meta.security and Provenance are not exclusively to be set or used by Security. These values might be populated by a Security Labeling Service (SLS), but that service should not overwrite values that have been explicitly set. Yes, they are used by Security, but security also uses many other elements in the resources that many think are only useful for clinical use.