Monday, January 27, 2014

Constrained Vocabulary and Schema are good and needed - But Robustness must rule the longitudinal HIE

Strict schema and vocabulary are persistent hot topics in Interoperability. For example, what is the constrained vocabulary that should be used for CCDA documents in the USA? This is an effort of Profiling, or even Profiling-of-a-Profile: further constraining vocabulary and schema as far as possible while still providing some value. This effort to constrain vocabulary and schema is helpful in the early days of building an HIE because it helps simplify (KISS). The simpler the interaction, the more likely it will succeed. However, the simpler the interaction, the less information can be communicated.



In building a Health Information Exchange (verb), one needs to start simple, and this message is built into the IHE guidance on building an HIE. Within that guidance there is a white paper (handbook), the IHE Affinity Domain planning kit, that walks an HIE organization through this constraining work. It is still a fantastic resource for building your governance, code-sets, and policies; as seen from Connecticut.

I think, however, that the more critical part of this HIE building project is not picking a vocabulary and a schema, but rather defining the proper behavior related to metadata and to content. Specifically, what happens when content or metadata doesn't use that vocabulary (e.g., historic information, or information from a foreign land)? What is the sender's responsibility to 'fix up' codes? What is the receiver's responsibility to be 'robust' to deviations? Is there a role for a translation service? What is the medical-legal meaning of content that has been changed simply to meet some coding restriction?

Using a restricted code system should be guidance, not a mandate. Conformance should be measured at 'creation' events, not necessarily at 'transmission'. Everyone must be liberal in how they process incoming content. This is fundamental to the success of the Internet and is known as "Postel's Law" or the "Robustness Principle". http://en.wikipedia.org/wiki/Robustness_principle
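
To make this create-strict / receive-liberal split concrete, here is a minimal sketch in Python. The value set, codes, and function names are hypothetical illustrations of the principle, not taken from any IHE or HL7 specification.

```python
# Hypothetical Affinity Domain value set, used only for illustration.
AFFINITY_DOMAIN_CLASS_CODES = {"REPORTS", "SUMMARY", "IMAGES"}


def validate_on_creation(class_code: str) -> None:
    """Be conservative in what you send: refuse to publish *new* content
    whose code is outside the current Affinity Domain vocabulary."""
    if class_code not in AFFINITY_DOMAIN_CLASS_CODES:
        raise ValueError(
            f"classCode '{class_code}' is not in the Affinity Domain value set")


def accept_on_receipt(class_code: str) -> dict:
    """Be liberal in what you accept: never reject incoming content just
    because its code is outside today's value set; preserve it and warn."""
    recognized = class_code in AFFINITY_DOMAIN_CLASS_CODES
    return {
        "classCode": class_code,
        "recognized": recognized,
        "warning": None if recognized
        else "Code not in current vocabulary; content preserved as received",
    }
```

The point is the asymmetry: conformance is enforced where content is created, while the receiving side degrades gracefully instead of refusing the content.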

Use-cases like insurance or public-health reporting can get away with this code restriction, as there is little 'danger' from the loss of accuracy caused by a coding translation. This is why it was logical and reasonable for the original intent of HIPAA to define specific and constrained code-sets, and why it is reasonable for public health to define a coarse-grain vocabulary.

The actual codes maintained in the medical record, the ones that will be used for current and future treatment, have not changed; they are the codes that the doctor or medical device picked as the best codes at the time. When making treatment decisions, accuracy is very important. Deviations from original accuracy are not unheard of, but when they happen they are clearly identified as a derivative, a transform, or a translation.

XDS has had from the beginning the concept of a restricted code-set for metadata, the concept of the "XDS Affinity Domain". But we always expected the document content to be the original content, unless it was a properly approved "Transform" (a concept also supported by XDS). The dynamic document concept is clearly an exception that could be called out specifically. The codes in the metadata are intended to be 'meta', and thus a bit of accuracy loss for the benefit of easier communication (interoperability) is reasonable. This is emphasized very specifically for some metadata, like classCode (the high-level classification of the kind of content), but is also true of more fine-grain items like typeCode. Even typeCode is just a code representing the whole document, and thus not a complete representation of the whole content. They are both 'meta'.
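
As an illustration of what 'meta' means here, a sketch of a DocumentEntry-like structure is below. The field names follow the XDS metadata concepts discussed above, but the specific code values and identifiers are hypothetical examples, not a recommended value set.

```python
# Illustrative sketch only: XDS-style DocumentEntry metadata where the codes
# describe the document but never replace or re-code its clinical content.
document_entry = {
    # classCode: coarse, high-level classification of the kind of content
    "classCode": {"code": "REPORTS", "display": "Reports",
                  "codingScheme": "urn:example:classCodes"},
    # typeCode: finer grain, yet still only a single code standing in for
    # the whole document, not a complete representation of it
    "typeCode": {"code": "11506-3", "display": "Progress note",
                 "codingScheme": "2.16.840.1.113883.6.1"},  # LOINC
    # The document itself is communicated unchanged
    # (original content, or a properly approved Transform)
    "uri": "urn:example:document/1234",
}
```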

Even XDS recognized that this constrained "XDS Affinity Domain" vocabulary will evolve over time. As much as you think you can control the vocabulary today, the future will want different constraints. These different constraints can only add concepts. It is possible to deprecate "new use" of old concepts, but the old concepts can't be forbidden.
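
One way to picture that evolution, as a hedged sketch: the value set only grows, and old concepts are at most marked deprecated for new use. The statuses and code values below are hypothetical.

```python
# Hypothetical evolving Affinity Domain value set: concepts are added or
# deprecated over time, but never removed.
VALUE_SET = {
    "REPORTS":  "active",
    "SUMMARY":  "active",      # added in a later revision
    "WORKFLOW": "deprecated",  # new use discouraged, old content still valid
}


def check_code(code: str, creating: bool) -> str:
    status = VALUE_SET.get(code)
    if status is None:
        # Unknown codes may come from historic or foreign content;
        # they are never 'forbidden' on receipt.
        return "reject" if creating else "accept-with-warning"
    if status == "deprecated" and creating:
        return "reject"  # deprecate only *new* use of the old concept
    return "accept"
```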

It is the concept of 'forbidden' that worries me most. Anytime a constrained vocabulary is selected, it 'implies' that codes outside that vocabulary are 'forbidden'. This is an 'implied' POLICY. Please don't make it an implied policy. Please make it an explicit policy, and I suggest that the policy follow the Robustness Principle, aka Postel's Law: be conservative in what you send, liberal in how you receive.

When put into the context of a longitudinal record, rather than the context of an instant-in-time message, the 'send' point in time is the point at which the content is created, not the point when it is transmitted. This means that when content is created it should use the best vocabulary and schema available at that time, and it should be intended to be as conforming as possible. However, we must recognize that a document created today might be needed 10 years from now, when the schema or vocabulary have changed. The new rules should not be applied retroactively, and any system receiving the content should try as hard as it can to understand the 10-year-old content. Sometimes this means that it can't be fully processed and that the user (clinician) needs to be warned of this.

This is the receive side robustness.

Eating an Elephant -- How to approach IHE documentation on Health Information Exchange (HIE)

Monday, January 20, 2014

FHIR Full Steam Ahead

Update from the HL7 Workgroup Meeting (WGM). Although I am not at the meeting due to a traumatic automobile accident, the FHIR Management Group (FMG) did have a meeting that they extended to those of us members who couldn't make it in person. The main agenda for the meeting was to agree on whether we would target a second formal ballot or go directly to DSTU. The GE pushback has caused much open and transparent discussion among the FHIR community. The FMG took this FHIR community consensus as advisement. There was a very visible survey done at the FHIR Connectathon, as reported to me by Scott Bolte (GE).

After all this deliberation the FMG voted unanimously to go directly to DSTU, provided nothing traumatic comes up this week at the WGM. I will note that the GE position was to urge a second ballot, while being very clear that GE accepts either outcome as long as the process is open and transparent.

A few actions did happen as a result of the open and transparent discussion prompted by the GE pushback. First, there will be some letters published openly that explain the way FHIR is going to use the DSTU period. These letters will stress that during the DSTU phase there will be no effort to maintain backward compatibility, yet all changes must be justified and persuasive.

Also, there will be a formal bug-tracking system attached to the SVN repository used to maintain the source for the FHIR specification. Everyone is encouraged to report bugs; membership is not a factor. All bugs will be formally tracked, discussed, and disposed of, and the changes to the specification will be linked to the bug.

There will be regular releases of the 'current' specification, with change-tracking generated from the bug-tracking system. Some of these releases will be milestones that are preserved for longer, such as the versions used for Connectathons.

I am very happy with this result. Did I want a second ballot? Yes. But what I really wanted was an open and transparent discussion of the process, with a very visible understanding of the current stability and maturity of the specification, along with go-forward mechanisms and milestones.

Wednesday, January 15, 2014

Excited about FHIR, but want it done right

I am a huge supporter of HL7 FHIR. I am involved in its development, its use by IHE, and promoting it inside GE Healthcare. I am about as involved in the FHIR standard as I can be, given my day job. I truly want and expect FHIR to succeed. It is far better in many ways than the existing HL7 v2 or v3. It will break Healthcare out of the dark ages and into the modern world of Interoperability.

The best way to get to this vision is to make sure that it receives as much review as possible, without going overboard. Too much review is a bad thing too, as many standards have died due to over-analysis. However, FHIR has received ONE formal ballot, while benefiting from many people experimenting with it at the various FHIR Connectathons (hackathons). I encouraged everyone back in August to review and comment: Time to kindle the FHIR - It needs ballot comments to grow. I provided 42 comments, mostly focused on DocumentReference (aka XDS), during this one formal ballot. I worked with Grahame to resolve these, and I am happy that he gave them the best review he could.

I, however, want another chance to review the whole FHIR ballot, and even more so I want everyone who is now more excited than ever to have a chance to review and comment on the whole FHIR ballot. There were hundreds of comments that changed almost every part of FHIR. Most of these changes were made by a very small core team that I have total faith in. My concern is not that the HL7 ballot process was improperly executed; I am interested in making sure that I, and all the newly excited people, get a second chance. A second chance to make sure the content consistently follows the FHIR core principles and is of reasonable quality.

The DSTU phase is a dynamic phase. There WILL be more changes during the DSTU phase, so I know that a second ballot is not necessary to get convergence; it could happen during DSTU. However, the more changes we make during DSTU, the less visibility those changes have and the more they will break.

I, on behalf of GE, sent the following message to the FHIR Management Group (FMG), FHIR Governance Board (FGB), and the FHIR mailing list.

---------------------------------------------------------------------------------------------------------------------------------------------
GE Healthcare would like to express our complete support for FHIR. GE has provided comments and have seen these comments resolved to our satisfaction. However we would like to encourage the FMG and FGB to support another formal ballot before entering the DSTU phase. This reasoning is not that reconciliation of any specific votes is not satisfactory, but rather that the overall change by the total votes requires a renewed top-to-bottom examination. The most concerning is where a voter (A) was satisfied with a portion of the original ballot, where that section is changed by voter (B) to something that voter (A) would not agree with. It is simply too hard to watch the total ballot reconciliation and track all changes piecemeal.

The future for FHIR is very bright, and now is the time to make sure that it meets all the principles and uniformly applies them. GE realizes that this extra step is not minimally necessary according to the HL7 GOM. We accept the decision of the FMG and FGB, and will continue to support FHIR regardless of if another formal Ballot is executed or if FHIR enters DSTU directly.
--------------------------------------------------------------------------------------------------------------------------------------------

Please give us a second chance to review and comment on the FHIR content before it enters DSTU.

Saturday, January 4, 2014

Recirculation Ballot of the HL7 Healthcare Privacy and Security Classification System (HCS)

The HCS is being forced through a recirculation ballot because two people are objecting in broad terms to any mechanism that would allow for ‘segmentation’ of data. The committee has tried to address their concerns, which are policy concerns and not technical concerns. We did agree to warn those using the HCS of potential harm caused by segmentation. They have refused to withdraw their negative votes.

The mechanism for dealing with this in HL7 is a recirculation ballot: a targeted ballot sent to those who participated in the original ballot, asking them to consider the outstanding negative ballot comments and either vote Affirmative to override the negative comments, Negative to agree with them, or Abstain. The details are in the recirculation ballot package.

The concerns are not unfounded; they are just not related to what the HCS is defining. The HCS is a ‘conceptual level’ specification of using broad security-tags to aid Access Control for Privacy or Security purposes. It is not ‘platform dependent’, nor is it an ‘organizational policy’. The specific concerns to be considered are (these are in the recirculation ballot package):
  • Data tagging is fragile: I would agree that tagging data is a fragile thing when the tag is conveying current policy. However, the HCS just defines the ‘conceptual level’ concept of security-tags; it does not mandate that they be used, nor define the ‘platform specific’ mechanisms to communicate them. Separating Metadata tags from Package tags from Consent Policies is important for robustness and for agility to policy changes (see the sketch after this list):
    • Metadata – Metadata is a description of the data, and only the data. This level of security tag really needs only to describe the data.
    • Package – The package is the abstract concept of the interaction between parties. It would include push or pull interactions. It thus would include something like assertions of who the user requesting (pull) the data is, and under what purposeOfUse. This level of security tag can carry specific obligations about the communication agreement. It would not duplicate the Metadata, but could be a summarization of the Metadata. It could carry obligations related to the interaction (do-not-print), and it could carry pointers to consent policies (see next).
      • The unfortunate reality is that ‘platform specific’ package-level tag carrying is not very mature. Thus Metadata tags often carry these Package tags, or they are part of the overriding policies (e.g. DURSA).
    • Consent Policies – This is an independently managed policy information point (PIP) that holds the current status of patient authorizations (aka Consent). This only applies where the interacting parties both agree upon one Consent Policy Point. Most of the time a sender and recipient have independently managed Consent Policy Points, as consents tend to be specific to a data-holding organization. Where there is a common Consent Policy Point, it would be an independent communications pathway from the package. Most of the time a consent to release is indeed independent of the policy the recipient would need in order to continue to use or disclose.
  • Fine-grain tagging could paralyze medical practice – I don’t necessarily disagree, but this is a concern of Policy, and specifically of the fine-grain CDA internal tagging discussion. The HCS defines only the ‘conceptual level’ and does not indicate whether tagging is fine-grain or coarse-grain. The HCS has no CDA specifics in it. The CDA-specific use of the HCS is part of the DS4P ballot, which is being re-balloted. I have pointed at the use of "Transforms" as an alternate model.
  • LOINC and SNOMED should be used, not a security/privacy-specific vocabulary – It is hard to argue against the idea that universal and perfect use of these vocabularies would make life easier; neither Security nor Privacy is going to change that. However, for the engine that needs to control security/privacy access, operating on a much smaller subset that represents the roll-up from a security/privacy perspective is more efficient and more likely to enforce the right rules. This roll-up is typically done once, rather than at each access. Security/privacy codes are also actionable when the content is free-text, graphical, or minimally coded, which further supports having them. The HCS can also be used in DICOM or IHE standards.
  • The vocabularies pointed at include some dangerous codes – YES, they do. The fact that a code exists does not mean it must be used. Even LOINC and SNOMED have some questionable codes, more so in their history. Thus the comment should be directed at the policies that choose a value-set from these vocabularies. The HCS just points at existing vocabularies and doesn’t forbid other vocabularies.
  • Rules are regional – This was not mentioned in the negative comments, but has come up. The rules in one region, the sending region, might not be the same as the rules in the receiving region. Thus presuming that the proper thing will be done can fail. See Rob’s excellent article on this: http://fairhaven.typepad.com/my_weblog/2013/12/confidentiality-code-use-cases.html
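
To make the Metadata / Package / Consent separation above concrete, here is a conceptual sketch. The field names and identifiers are hypothetical; ‘R’ (restricted) is one of the HL7 confidentiality codes the HCS points at, and the do-not-print obligation and purposeOfUse are the examples used in the bullets above.

```python
# Conceptual sketch only: keeping Metadata tags, Package tags, and Consent
# policy pointers separate. Field names and identifiers are hypothetical.

metadata_tags = {
    # Describes the data, and only the data.
    "confidentialityCode": "R",  # e.g. HL7 confidentiality 'restricted'
}

package_tags = {
    # Describes this specific interaction between the parties.
    "requestingUser": "Dr. Example",
    "purposeOfUse": "TREAT",
    "obligations": ["do-not-print"],
    # Pointer to the consent, not a copy of it.
    "consentPolicyRef": "urn:example:consent/patient-1234",
}

# The consent itself lives in an independently managed Policy Information
# Point (PIP), referenced by the package rather than duplicated in the
# data's metadata.
```
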
Could the HCS be made better? A standard can always take on some improvement. I think this one is in good enough shape for now. As we use it, we can revise it.
  • The HCS is predicate work to the DS4P ballot. The DS4P ballot is being re-balloted.
  • The HCS is being referenced in IHE as a way of using the multi-valued metadata entry – confidentialityCode. 
  • The HCS is being referenced in FHIR.
If you were involved in the original HCS ballot, when the recirculation ballot opens on Monday, please set your vote to Affirmative. 

More articles:
Patient Privacy controls (aka Consent, Authorization, Data Segmentation)
Access Control (Consent enforcement)