Friday, February 7, 2025

Did something change in the IG I am using?

IHE and HL7 are writing and revising Implementation Guides at a fevered pace. Other organizations are also writing and revising Implementation Guides, including regional IHE and HL7 affiliates, and many more. Everyone who writes an Implementation Guide (IG) wants to create a perfect and fully comprehensive specification on the first try. However, that is simply not realistic, and any IG that has had only one version is most likely one that no one is using.

Two very important standards organization mechanisms are critical to moving an IG toward that goal:

  1. Clear indications of what changes were made and why.
  2. A method for users of the IG to submit comments and improvement requests.

How do you know what has changed?

Within IHE we try hard to produce a human-friendly listing of the changes made in each version. This listing does not include inconsequential changes such as typo fixes, and it usually just summarizes a change (e.g., "added examples"). You will find it on the main page (index.html) of the IG, in a pink Note section.

You can also get to all the historic versions through the "Directory of published versions" link found at the top of the main page (index.html) in the bright yellow box. On the history page you can find each historic version, and the above-mentioned change summaries should also show up there.


The next level of detail is to look at all the closed GitHub issues (or, in the case of HL7, the closed Jira tickets). Select the "Issues" link in the footer and navigate to closed issues. Sorry, I am not going to try to explain GitHub issue tracking here; if you know how to use it, then you know.

If you are really interested in technical conformance resource changes, these are also available, although not as easy to find. First go to the footer of the IG and select "QA Report".

On the QA Report, there is a section "Previous Version Comparison" that will give you very detailed, computer-generated differences.



How do I submit a comment?

Standards live by comments; they really are the food that makes standards useful. So please submit a comment anytime you have one. The best time to comment is during Public-Comment, as we plan to address comments at that time with the intent to resolve all of the public comments received.

Comments can identify a typo or bug, something that is not clear to you, something you and a peer argue about, or something you would like the IG to do. All comments are welcome.

You can comment at any time, even after Public-Comment when the IG is Trial-Implementation, or even when it goes Final-Text (normative). Within the Implementation Guide you want to comment on, there are two ways. The first is to submit a GitHub issue. The second is "Propose a change", a web form that anyone (member or not) can use.

Conclusion

As a specification gets more normative, it will change less. In theory, once a specification reaches normative status (Final-Text), it will not get a change that breaks any system that implemented the previous version. But the status of a specification should never stop you from submitting a comment at any time.


Tuesday, February 4, 2025

AI privacy controls

AI and privacy are hot topics lately. I have effected some changes in HL7, and have written blog articles about the capabilities that exist. I am also a participant in a few AI initiatives in HL7 and within the VHA. These solutions are most well developed in FHIR, but they build on fundamental Provenance, data tagging, and Consent, so they can work with most standardized data and datasets.

The main topic areas in AI:

1) Can data be used to train an AI?

Given that some data should be authorized for use in training an AI, how does one express rules that enable some data to be used while forbidding other data from being used to train an AI?

This needs to be done at the whole-dataset (e.g., EHR) level, where one might want to forbid some subset of the data from being used for training.

It is also needed at the patient-specific Consent level, so that a patient can choose not to have their data included.
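As a sketch of what such a patient-specific rule could look like, here is a minimal FHIR R4 Consent fragment built as a Python dict. The purpose code "MLTRAIN" and its code system are illustrative placeholders, not an established HL7 vocabulary, and the fragment is trimmed to the elements relevant here (a real Consent requires additional elements such as category).

```python
# Hypothetical sketch: a FHIR R4 Consent whose base provision permits
# normal uses, with a nested provision that denies use of this
# patient's data for AI model training. The "MLTRAIN" purpose code
# and its system are placeholders for illustration only.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "scope": {"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/consentscope",
        "code": "patient-privacy"}]},
    "patient": {"reference": "Patient/example"},
    "provision": {
        "type": "permit",            # base rule: normal uses permitted
        "provision": [{
            "type": "deny",          # exception: no AI-training use
            "purpose": [{
                "system": "http://example.org/CodeSystem/ai-purpose",
                "code": "MLTRAIN",
                "display": "machine learning model training"
            }]
        }]
    }
}
```

The same permit/deny-with-exceptions pattern would also carry an organization-level rule at the dataset level, rather than per patient.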

2) How to indicate the data that was used to train an AI model?

    Once you have an AI Model, it is important to keep track of what data was used to train that AI Model. This way, if a concern comes up about some data, one knows whether that concern impacts your AI model or not.

    Provenance of the data used to create the AI Model -- https://healthcaresecprivacy.blogspot.com/2024/01/provenance-use-in-ai.html

3) How can the use of "my data" in an AI decision be controlled?

    How a consent can allow or deny the use of the patient's data in a clinical decision made on behalf of that patient (no article yet):
    • When an AI retrieves patient-specific data to enable payment decisions or treatment decisions, those data accesses use a very specific PurposeOfUse. This enables policy to permit or deny that access. The AI accessing the data is distinct from a person accessing it for payment or clinical purposes.
    • PurposeOfUse: PMTDS (when the AI is acting to aid with payment decisions)
    • PurposeOfUse: TREATDS (when the AI is acting to aid with clinical decisions)
    • If there is no rule that uses these PurposeOfUse values, then their parent term (payment or treatment) takes precedence.
    • These can be used in a Consent for patient-specific rules, or in a Permission for an organization rule. Realistically both, as the overriding policy must be stated so that the Consent can accept and/or override it.
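The fallback-to-parent precedence described above can be sketched as a small Python function. The parent mapping (TREATDS under treatment, PMTDS under payment, here written as TREAT and HPAYMT) reflects my reading of the HL7 v3 ActReason PurposeOfUse hierarchy; verify the exact codes against the published code system before relying on them.

```python
# Sketch: resolve a permit/deny decision for a PurposeOfUse code.
# A rule naming the specific decision-support code wins; otherwise the
# parent term (treatment or payment) applies; otherwise default-deny.
PARENT = {"TREATDS": "TREAT", "PMTDS": "HPAYMT"}  # assumed hierarchy

def effective_rule(rules: dict, purpose: str) -> str:
    """Return 'permit' or 'deny' for the given PurposeOfUse code."""
    if purpose in rules:
        return rules[purpose]        # specific rule wins
    parent = PARENT.get(purpose)
    if parent and parent in rules:
        return rules[parent]         # parent term takes precedence
    return "deny"                    # nothing matched: default-deny

org_policy = {"TREAT": "permit"}         # org permits treatment access
patient_consent = {"TREATDS": "deny"}    # patient forbids AI decision support
```

With these two rule sets, an AI access with purpose TREATDS would be permitted under the organization policy alone (via the TREAT parent), but denied once the patient's specific TREATDS rule is layered on top.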

4) How to indicate data is the output of an AI?

      Once an AI is used to produce a decision or recommendation, how is that recorded into the dataset (e.g., EHR) so that future uses know that it came from an AI rather than from a clinician or other professional? This is simply provenance: enabling knowing where data came from.

      On how data can be tagged as having come from an AI - https://healthcaresecprivacy.blogspot.com/2024/09/healthcare-ai-provenance-of-ai-outputs.html 
      • Tagging can be at the data resource or element level 
      • Tagging can be by simple security tag 
      • Tagging can be full Provenance for more details
      An important attribute of the Provenance is to record which version of the AI was used, which model was used, and what data inputs were given (what portion of the patient's chart was input).
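As a hedged sketch of that full-Provenance option, here is a FHIR R4 Provenance fragment as a Python dict, recording that an Observation was produced by an AI. The "assembler" agent type is a real provenance-participant-type code; the target reference, model name, version string, and input reference are all hypothetical placeholders.

```python
# Sketch: FHIR R4 Provenance marking Observation/ai-summary-1 (a
# hypothetical target) as AI output, capturing the model/version and
# the data inputs given to the model. Names here are illustrative.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/ai-summary-1"}],
    "recorded": "2025-02-04T12:00:00Z",
    "agent": [{
        "type": {"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "assembler"}]},
        "who": {"display": "ExampleAI model v2.3 (hypothetical)"}
    }],
    "entity": [{
        "role": "source",   # what portion of the chart was input
        "what": {"reference": "DocumentReference/patient-chart-extract"}
    }]
}
```

For the lighter-weight options in the bullets above, the same "came from an AI" fact could instead ride as a security tag in the target resource's meta.security, without a separate Provenance resource.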

Conclusion

We have far more available than most people who start to talk about AI think we have. I am sure it is not everything we need, but I want to help encourage the use of the methods we already have before we reinvent the wheel.