Friday, February 7, 2025

Did something change in the IG I am using?

IHE and HL7 are writing and revising Implementation Guides at a fervent pace. Other organizations are also writing and revising Implementation Guides, some of them regional IHE or HL7 affiliates, and many more. Everyone who writes an Implementation Guide (IG) wants to create a perfect and fully comprehensive specification on the first try. However, that is simply not realistic, and any IG that has had only one version is most likely one that no one is using.

Two standards-organization mechanisms are critical to achieving a perfect IG:

  1. Clear indications of what changed and why.
  2. A method for users of the IG to submit comments and improvement requests.

How do you know what has changed?

Within IHE we try hard to produce a human-friendly listing of the changes made in each version. This listing does not include inconsequential changes such as typo fixes, and it usually just summarizes a change (e.g., "added examples"). You will find it on the main page (index.html) of the IG, in a pink Note section.

You can also get to all the historic versions through the "Directory of published versions" link found at the top of the main page (index.html) in the bright yellow box. On the history page you can find each historic version, and the above-mentioned change summaries should also show up there.


The next level of detail is to look at all the closed GitHub issues (or, in the case of HL7, the closed Jira tickets). Select the "Issues" link in the footer, and navigate to closed issues. Sorry, not going to try to explain GitHub issue tracking here. If you know how to use it, then you know.

If you are really interested in technical conformance resource changes, these are also available, although not as easy to find. For this you must first go to the footer of the IG and select "QA Report".

On the QA Report, there is a section "Previous Version Comparison" that gives very detailed, computer-generated differences.



How do I submit a comment?

Standards live by comments; they really are the food that makes standards useful. So please submit a comment anytime you have one. The best time to comment is during Public-Comment, as we plan to address comments at that time with the intent to resolve all of the public comments received.

A comment can identify a typo or bug, point out something that is not clear to you, raise something you and a peer argue about, or request something you would like the IG to do. All comments are welcome.

You can comment at any time, even after Public-Comment when the IG is Trial-Implementation, or even when it goes Final-Text (normative). Within the Implementation Guide you want to comment on, there are two ways: the first is to submit a GitHub issue; the second is "Propose a change", a web form that anyone (member or not) can use.

Conclusion

As a specification becomes more normative, it will change less. In theory, once a specification reaches normative status (Final-Text), it will not receive a change that breaks any system that implemented the previous version. But the status of a specification should never stop you from submitting a comment at any time.


Tuesday, February 4, 2025

AI privacy controls

AI and privacy are hot topics lately. I have effected some changes in HL7 and written blog articles about the capabilities that exist. I am also a participant in a few AI initiatives in HL7 and within the VHA. These solutions are most well developed in FHIR, but they are fundamentally provenance, data tagging, and Consent, so they can work with most standardized data and datasets.

The main topic areas in AI:



1) Can data be used to train an AI?

Given that some data should be authorized for use in training an AI, how does one express rules that enable some data to be used while forbidding other data from being used to train an AI?

This needs to be possible at the whole-dataset (e.g., EHR) level, where one might want to exclude some subset of the data from training.

It is also needed at the patient-specific Consent level, so that a patient can choose to not have their data included.
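A minimal sketch of the shape this could take in a FHIR R4 Consent is below: a base permit with a nested deny provision scoped to a purpose-of-use for AI training. I am not aware of a standard purpose-of-use code for model training, so the AITRAIN code and the example.org code system here are hypothetical placeholders, not defined terminology.

    {
      "resourceType" : "Consent",
      "status" : "active",
      "scope" : {
        "coding" : [{
          "system" : "http://terminology.hl7.org/CodeSystem/consentscope",
          "code" : "patient-privacy" }]
      },
      "category" : [{
        "coding" : [{
          "system" : "http://loinc.org",
          "code" : "59284-0" }]
      }],
      "patient" : { "reference" : "Patient/example" },
      "policyRule" : {
        "coding" : [{
          "system" : "http://terminology.hl7.org/CodeSystem/v3-ActCode",
          "code" : "OPTIN" }]
      },
      "provision" : {
        "type" : "permit",
        "provision" : [{
          "type" : "deny",
          "purpose" : [{
            "system" : "http://example.org/CodeSystem/purpose-placeholder",
            "code" : "AITRAIN" }]
        }]
      }
    }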

2) How to indicate the data that was used to train an AI model?

Once you have an AI model, it is important to keep track of what data was used to train it. If a concern later comes up about some data, you then know whether that concern impacts your AI model or not.

Provenance of the data used to create the AI Model -- https://healthcaresecprivacy.blogspot.com/2024/01/provenance-use-in-ai.html
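In the spirit of that article, a Provenance can point at the AI model as its target and list the training data as source entities. The sketch below is mine, not taken from the article, and every resource id in it (Device/ai-model-example, Organization/model-builder-example, Group/training-cohort-example) is a hypothetical placeholder.

    {
      "resourceType" : "Provenance",
      "target" : [{ "reference" : "Device/ai-model-example" }],
      "recorded" : "2024-01-15T00:00:00Z",
      "agent" : [{
        "who" : { "reference" : "Organization/model-builder-example" }
      }],
      "entity" : [{
        "role" : "source",
        "what" : { "reference" : "Group/training-cohort-example" }
      }]
    }

Here the Group stands in for the training cohort; a Bulk Data export manifest or a List could serve the same role.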

3) How can the use of "my data" in an AI decision be controlled?

How can a Consent allow or deny the use of the patient's data in a clinical or payment decision made on the patient's behalf? (No article yet; a sketch follows this list.)
• When an AI retrieves patient-specific data to enable payment decisions or treatment decisions, those data accesses use a very specific PurposeOfUse. This enables policy to permit or deny that access. The AI accessing the data is distinct from a person accessing for payment or clinical purposes.
• PurposeOfUse: PMTDS (when the AI is acting to aid with payment decisions)
• PurposeOfUse: TREATDS (when the AI is acting to aid with clinical decisions)
• If there is no rule that uses these PurposeOfUse values, then their parent term (payment or treatment) takes precedence.
• These can be used in a Consent for patient-specific rules, or in a Permission for an organizational rule. Realistically both, as the overriding policy must be stated so that the Consent can accept and/or override it.
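The patient-specific rule can reuse the Consent pattern sketched under topic 1, swapping the hypothetical training code for one of these PurposeOfUse codes from the v3-ActReason code system. A minimal provision fragment might look like:

    "provision" : {
      "type" : "permit",
      "provision" : [{
        "type" : "deny",
        "purpose" : [{
          "system" : "http://terminology.hl7.org/CodeSystem/v3-ActReason",
          "code" : "TREATDS" }]
      }]
    }

The base-permit/nested-deny shape is just one common pattern; an organization could equally start from a deny and permit TREATDS only for specific recipients.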

4) How to indicate data is the output of an AI?

Once an AI is used to produce a decision or recommendation, how is that recorded into the dataset (e.g., EHR) so that future uses know it came from an AI rather than from a clinician or other professional? This is simply provenance: enabling knowing where data came from.

On how data can be tagged as having come from an AI - https://healthcaresecprivacy.blogspot.com/2024/09/healthcare-ai-provenance-of-ai-outputs.html
• Tagging can be at the data resource or element level
• Tagging can be by simple security tag
• Tagging can be full Provenance for more details
An important attribute of the Provenance is to know what version of the AI was used, what Model was used, and what data inputs were given (what portion of the patient's chart was input).

Conclusion

We have a lot more in place than most people who start talking about AI think we have. I am sure it is not everything we need, but I want to encourage the use of the methods we already have before we reinvent the wheel.

Friday, December 13, 2024

IHE Updates for PCC and ITI

Updated Releases:

Public Comment

Formal Announcement

See all at https://profiles.ihe.net

Some Explanations

Most of these releases are incremental updates that don't require much comment, either addressing the public comment from last quarter or integrating formal Change Proposals (CPs) that were balloted and passed. If you need to understand these changes, there is traceability within the respective IHE GitHub repositories; all issues and CPs are tracked with individual GitHub issues and pull requests.

The biggest news is:

with an important mention of:

This is mostly a conversion of the QEDm profile, which had before this been published only in PDF form, to IG publication. In PDF form, the supplement was nothing but narrative; there were some conformance resources, but there was little assurance they were right. Now that QEDm is published in IG form, there is a full set of conformance resources and examples, and there is a clear connection to MHD using mXDE.

The future of QEDm is still in play. We intend to adjust QEDm to be a derivative of the HL7 IPA, so that there will be little mismatch between the two. This effort will need to address the fact that IPA includes functionality that IHE covers with PDQm and MHD. I expect this might be ready in summer 2025; assistance is always welcome.

DSG - JSON signature option

This adds an Option to the normative Document Digital Signature profile. The original profile used XML-Signature, as that was the best solution at the time. However, there is more tooling support for JSON digital signatures nowadays, and less interest in XML-Signature. So IHE adjusted the DSG profile to have two options: the original XML-Signature (which is assumed if no option is declared) and the JSON Digital Signature. The use-cases supported have not changed; DSG is about Document Digital Signatures and has some variations for the various ways that documents can be moved around using IHE Document Sharing.

Finance and Insurance Services

This supplement is a new domain for IHE. Although there is a robust community in the USA profiling the FHIR standard to support Finance and Insurance Services, there is a need for similar profiling outside the USA. That is the scope of this supplement, mostly "not the USA". This does not mean it conflicts with USA needs, but rather indicates that the intended audience is everyone outside the USA.


The current profiling is not all that different from the use-case analysis found in the FHIR Core Finance profiles, but it does define some capabilities that have been identified by some open-source implementations. The initial deployment is expected to be by the WHO. I expect we will receive robust comments as this gets "Trial Implemented", which is an admission that this IG is rather immature and open for discussion.

Scheduling

This IHE profile is based on the Argonaut Scheduling Implementation Guide, originally published back in the FHIR STU3 days. Argonaut has agreed to hand over the FHIR R4 version and future development to IHE. Thus, this IG is mostly a conversion to FHIR R4, but it includes significant improvements based on experience.

The following are some of the major differences from the Argonaut IG (a sketch of invoking one of the operations follows the list):
• The IHE Profile is based on FHIR R4
• The IHE Profile is intended for international use, and it does not have required bindings or any dependencies on national profiles
• The operations described are $find, $hold, and $book
• A separate transaction describes the use of FHIR Search for the Appointment resource
• The operation parameters use explicit data types, and support only POST transactions
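Since the operations are POST-only with explicitly typed parameters, an invocation carries a Parameters resource in the body. The sketch below shows what a $find call could look like; the parameter names (start, end, specialty) and the example values are illustrative assumptions on my part, so consult the published IG for the normative parameter list.

    POST [base]/Appointment/$find
    Content-Type: application/fhir+json

    {
      "resourceType" : "Parameters",
      "parameter" : [
        { "name" : "start", "valueDateTime" : "2025-03-03T09:00:00Z" },
        { "name" : "end", "valueDateTime" : "2025-03-07T17:00:00Z" },
        { "name" : "specialty", "valueCodeableConcept" : {
            "coding" : [{
              "system" : "http://snomed.info/sct",
              "code" : "394802001",
              "display" : "General medicine" }]
        } }
      ]
    }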

Friday, November 1, 2024

De-Identification as a Service

I have had some conversations lately around a De-Identification Service, specifically whether a general service is possible that could be used like Actors within IHE. The problem that I have historically run into is that there is no standard for defining a de-identification policy: the set of rules that would drive the de-identification process in a way that (a) protects against re-identification and (b) provides sufficient detail in the resulting dataset for a (c) given purpose.

There are standards on the concept of de-identification, and I have written articles on the process. Key to any discussion of de-identification is to recognize that it is a process, not an algorithm. De-identification is not like encryption or signatures, for which one can have a defined algorithm. This is because de-identification is trying to balance opposing forces: the appropriate use of the data, which needs specific fidelity, against the inappropriate re-identification of the subjects of the data, whose privacy must be protected.

IHE has defined a "De-Identification Handbook" that speaks to how to go about defining a De-Identification Policy, and addresses why this is a process. This handbook helps you identify what parts of your data are direct identifiers and what are indirect identifiers. It identifies some common ways to change data during the de-identification process, such as redact, generalize, fuzz, replace, etc. The handbook also covers how to assess your dataset to see if your choice of policy is sufficient.

I have a general orchestration diagram in my Security and Privacy Tutorial - http://bit.ly/FHIR-SecPriv


This diagram is very abstract, presuming some kind of query can be done by some Research Analytics App, mediated by a De-Identification Service which, if the request is authorized and appropriate, can forward the request to a Resource Server. The Resource Server responds with the full-fidelity data; the De-Identification Service mediates and de-identifies the data before returning the results to the Research Analytics App. This generalization presumes a lot, including that the query can be mediated like this and that the results can be de-identified in real time. Most de-identification is done on a dataset, so that the resulting dataset can be analyzed to verify that it has indeed met the goal of de-identification, often using an algorithm like k-anonymity. The above could be done, but it is far more of a systems-design task, and not as simple as shown.

I think it more likely that De-Identification Service orchestration happens on a PUSH or FEED of data. That is not to say that it might not be a query, but rather that it operates on a BULK of data. So, for example, FHIR Bulk Data Access might work; a kick-off request for such a feed is sketched below. With that in mind, let's take a generic push set of Actors and Transaction.
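Nothing in this feed is specific to de-identification; it is plain FHIR Bulk Data Access. The resource types and _since value here are just examples:

    GET [base]/Patient/$export?_type=Observation,Condition&_since=2024-01-01T00:00:00Z
    Accept: application/fhir+json
    Prefer: respond-async

The server answers 202 Accepted with a Content-Location to poll; the Data Recipient side of the De-Identification Service would retrieve the resulting NDJSON files, de-identify the dataset as a whole, and only then make it available onward.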


In this diagram there is a data source and a data recipient, and some standards-based transaction between them.

We then insert our De-Identification Service in between, by grouping a Data Recipient with our De-Identification Service and by also grouping a Data Source. Thus the original two actors are still talking end-to-end, but they are talking to each other through an intermediary.



We then recognize that the de-identification policy needs to be available to the De-Identification Service, and must be administered by some Policy Admin.


Unfortunately, I don't know of a standard that exists for De-Identification Policy. So these two actors can't really be defined; they need to be some functionality inside of the De-Identification Service.

So, this is the diagram I come up with. This is more than what I discussed above, as it starts with document-based sharing and ends up with de-identified FHIR REST queries. Thus, the data is fed into the De-Identification Service (MHD), but that De-Identification Service groups a bunch (mXDE) of other IHE profiles and ultimately provides access to the de-identified data using FHIR REST (QEDm). This diagram does not abstract out the policy; it is part of the systems design.



I have used MHD and QEDm in this example. But given that I simply grouped within the De-Identification Service the peer Actor from those transactions, the external view of the De-Identification Service is that it is using the MHD and QEDm standards; essentially, magic happens inside.

Similar can be done with other standards. This is left as an exercise for my reader.


Wednesday, September 25, 2024

Is honoring a Patient's Consent a form of forbidden Information Blocking?

As I work hard to enable a patient to express the privacy rules around how their health information can be used, by whom, and for what reasons, I hear worry that an organization that honors those wishes by blocking the data for a given use may be seen as violating the regulations forbidding Information Blocking.

In HTI-2 there is some discussion of certain sensitive data that has been called out as a special case. However, this is just one kind of data that is, or might be, considered sensitive by a patient.


My concern is wider than just the ONC HTI-2 and the USA Information Blocking regulations. There are other state-level regulations that might force data to be shared in circumstances where the patient does not want it shared. This is not to say I am against some required reporting, but to recognize that there is a wide overlap between potentially sensitive classes of data and unreasonable mandates to share data.

I am a fan of defining classes of data that are sensitive, that is, generally stigmatizing health topics. These defined classes need a specific and actionable definition, so that it is clear to all what is within the class and what is not. This is important to be sure policies work together when bridged. The reality is that these classes are not as distinct as we would like; today they are hardly even given names.

One class that is discussed is sexual health topics, which seems clear but is not clear at the detailed, technical level.

The Patient should be empowered to define what is sensitive to them. The use of sensitive classes of data should be a starting point, but the patient should also be allowed to restrict data within a timeframe, or data associated with a specific treatment episode/encounter, or even specific data identified by identifier.
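FHIR R4 Consent already has the elements for these refinements: provision.dataPeriod restricts by timeframe, and provision.data can point at an encounter's data or at an individual resource. A minimal sketch of such provisions, in which the resource ids are hypothetical placeholders:

    "provision" : {
      "type" : "permit",
      "provision" : [{
        "type" : "deny",
        "dataPeriod" : { "start" : "2024-06-01", "end" : "2024-06-30" }
      }, {
        "type" : "deny",
        "data" : [{
          "meaning" : "related",
          "reference" : { "reference" : "Encounter/example-episode" }
        }, {
          "meaning" : "instance",
          "reference" : { "reference" : "Observation/example-sensitive" }
        }]
      }]
    }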

When these complex Consents can be implemented by an organization, and that organization allows more refined Consent provisions, then these restrictions should not be seen as forbidden Information Blocking. We should not be questioning the patient's choices.

Tuesday, September 24, 2024

Healthcare AI - Provenance of AI outputs

AI is the focus of the HL7 Workgroup Plus meeting this week. As I sit in on the presentations, I find that some mechanisms the Security WG has already put in place are not well understood. So this article will expose some of the things that the Security WG has in place to support AI.

AI Output Provenance

First up is the concern that any diagnosis, notes, observations, or other content that is created by AI, or assisted by AI, should be tagged as such. With this provenance, any downstream use of the data or decisions is informed that the data came from an AI output.

An important aspect of this is to understand the background of the data, the Provenance. This might be a positive aspect, or might be seen as a drawback. The Security WG is not trying to impugn or promote; we just want to provide the way for the data or decision to be tagged appropriately.

There are two methods.

Provenance Tag

There is a data tag that can be applied to any data to indicate that it came from AI.

AIAST - Artificial Intelligence asserted --- Security provenance metadata observation value used to indicate that an IT resource (data, or information object) was asserted by an Artificial Intelligence (e.g., Clinical Decision Support, Machine Learning, Algorithm).

This might appear at the top of the FHIR Resource in the .meta.security:

                 "resourceType" : "Condition",
                 "id" : "1",
                 "meta" : {
                    "security" : [{
                      "system" : "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",             
                      "code" : "AIAST" }
                      ]
                    },
                 ... other content etc.....
               }
       

This can also be applied at the element level using the inline security labels defined in DS4P. This would cover a DiagnosticReport that has one .note element that is the output of an AI analysis of the data: the DiagnosticReport would indicate that there is an inline label, and just that one .note would be tagged as AI Asserted.
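A sketch of that element-level tagging is below. Be aware of the assumptions: .note on DiagnosticReport is the element the text above mentions (it is an R5 element), and I am writing the inline-label extension URL and the PROCESSINLINELABEL flag from memory of the DS4P work, so verify both against the published IG:

    {
      "resourceType" : "DiagnosticReport",
      "id" : "2",
      "meta" : {
        "security" : [{
          "system" : "http://terminology.hl7.org/CodeSystem/v3-ActCode",
          "code" : "PROCESSINLINELABEL" }]
      },
      "note" : [{
        "extension" : [{
          "url" : "http://hl7.org/fhir/uv/security-label-ds4p/StructureDefinition/extension-inline-sec-label",
          "valueCoding" : {
            "system" : "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
            "code" : "AIAST"
          }
        }],
        "text" : "Impression generated with AI assistance."
      }],
      ... other content etc. ...
    }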

Non-FHIR - The AIAST code is available for use elsewhere, such as in HL7 v2, CDA, DICOM, and IHE XDS. As a code it is very portable; these other standards include ways of carrying security tags, and thus this AIAST code.

Provenance Resource


The Provenance resource would be used when more than the tag is needed. This Provenance would take advantage of the AIAST tag to indicate that the purpose of this Provenance is to convey details about the AI assertion.

The above Provenance Tag might still be useful, with the Provenance Resource providing the details of the provenance of that assertion.

The Provenance Resource might also use the target element extension or target path extension to point at the specific elements of the target resource that came from AI assertions.

The Provenance Resource can also indicate the specific AI algorithm using a Device resource. In this way one can understand the revision of the AI that was used; if a problem (e.g., bias) is later determined with that version of the AI model, one can find all the decisions that were recorded from it. This might also include parameters and context around the use of the AI algorithm.

The Provenance Resource can indicate the data from the patient chart that were considered by the AI algorithm.

The Provenance can also indicate other traceability, such as what portion of the AI model was used.

As with any Provenance, the other elements can be filled out to provide details on when, why, and where.
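Pulling those pieces together, a Provenance for the AI-asserted Condition from the earlier example might look like the sketch below. Using AIAST as the activity coding is one way to take advantage of the tag as described above; the targetPath extension URL is the standard FHIR extension as I recall it; and the resource ids (Device/ai-model-example, Observation/lab-input-example) are hypothetical placeholders:

    {
      "resourceType" : "Provenance",
      "target" : [{
        "reference" : "Condition/1",
        "extension" : [{
          "url" : "http://hl7.org/fhir/StructureDefinition/targetPath",
          "valueString" : "Condition.note[0]"
        }]
      }],
      "recorded" : "2024-09-24T15:30:00Z",
      "activity" : {
        "coding" : [{
          "system" : "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
          "code" : "AIAST" }]
      },
      "agent" : [{
        "type" : {
          "coding" : [{
            "system" : "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code" : "assembler" }]
        },
        "who" : { "reference" : "Device/ai-model-example" }
      }],
      "entity" : [{
        "role" : "source",
        "what" : { "reference" : "Observation/lab-input-example" }
      }]
    }

The Device resource referenced by agent.who is where the AI model name and version would live, and the source entities record what portion of the patient's chart was input.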

AI use of Provenance

AI will often look at a patient record to determine a NEW diagnosis or write a new note. These interactions by AI should be aware of data that has the AIAST tag, so that the AI can distinguish data that was entered as new from data that was derived by previous AI use; the failure to do so is often referred to as "model collapse" or a "feedback loop." One possibility is that the AI will ignore any data or data elements previously authored by AI.

Tuesday, September 3, 2024

Speaking at free #HL7 #FHIR #HealthIT #Cybersecurity Event

Excited to announce that I'll be speaking at the HL7 FHIR Security Education Event on September 4-5! This virtual event is packed with insights and discussions tailored for everyone in the health IT community.

Two Tracks to Choose From:

1. General Track: Perfect for those looking to deepen their understanding of FHIR security without getting too technical.
2. Developer Track: Designed for health IT architects, developers and engineers who want to dive into the details.

Join me and other experts as we explore the latest in FHIR security. Don't miss out on this opportunity to enhance your knowledge and network with fellow professionals!

Register free at: https://info.hl7.org/hl7-fhir-security-education-event-0

#FHIR #HL7 #HealthIT #Cybersecurity #FHIRSecurity