Wednesday, March 21, 2018

Blockchain as a platform for Supply Chain

I went to a Supply Chain Management on Blockchain panel event at Marquette University last night. The recording of the session is available. The topic was using blockchain to manage provenance of supplies throughout the supply chain. It addressed supply chain in general, not healthcare specifically, nor the even more specific example of drug components in healthcare. However, the general concept of supply-chain use of blockchain is directly transferable to the drug supply. I might argue that the drug supply chain likely has more opportunity to take advantage of blockchain, and more money available to fund the kind of research that leads to an implementation.
[Image caption: Blockchain keeps fidgeters occupied, not bothering others]

The talk was mostly about the general concept, the general need, and generally how blockchain would help. Nothing shocking, but it is useful to hear it from people who have done it. The panelists were Dr. Mark Cotteleer from Deloitte, Chris Kirchner of Slync, and Paul Biwer from Biwer & Associates. The speakers were from organizations involved in actual (pilot) use in the food supply chain and in package shipping.

The first caution was that the larger the network of supply chain participants, the more complex it is; and the more complex, the harder it is to make a working system. Hence the very narrow focus in the package shipping use-case: track packages given into their control. In this case the speaker indicated there were about 5 participants, so I am guessing these were a few out-sourced transport or handling partners.

What data should go into the blockchain vs be managed classically?

I spoke to them afterwards. Given that they have experience, I asked whether any patterns have emerged as to what data actually goes into the blockchain vs what data is managed in classic ways. I was looking for patterns from experience. I got a very strong answer: "As little as you can get away with goes into the blockchain". One reason given was that the more you put there, the more fragile the system is to improvements. Once data is in the blockchain, it is there forever. It is far easier to later add an element of data that wasn't previously shared; but once something is there, it is permanently there.
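
A common way to follow that advice is to anchor data rather than store it: keep the full record off-chain, and put only a digest and a timestamp on the chain. Below is a minimal sketch of that pattern in Python; the ledger object and its append call are hypothetical stand-ins for whatever permissioned-ledger API is actually in use.

    import hashlib
    import json
    import time

    def anchor_record(ledger, record):
        """Keep the full record off-chain; put only a digest and timestamp on-chain."""
        # Canonicalize the off-chain record so the digest is reproducible
        canonical = json.dumps(record, sort_keys=True).encode("utf-8")
        entry = {
            "sha256": hashlib.sha256(canonical).hexdigest(),
            "timestamp": time.time(),  # the date/time stamp that freezes the record
        }
        ledger.append(entry)  # hypothetical call into the permissioned ledger
        return entry

Anyone holding the off-chain record can later recompute the digest and prove the record existed unchanged at that time, while nothing sensitive is permanently frozen into the chain.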

Permissioned blockchains

They are both using permissioned chains, where a small subset of participants are authorized to add to the chain, and where a subset of those have validation and counter-signature responsibilities. The validation requirements are use-case specific and data-accessibility specific. Where little data is on the chain, little can be validated. Where that data approaches nothing, the only validation is the date/time stamp freezing that data in time. So clearly there is a balancing task: put enough into the chain to make validation valuable, but not so much as to make the solution fragile or dangerous.
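
To make the counter-signature idea concrete, here is a rough sketch of how a permissioned entry might be validated. Everything here is an assumption for illustration: the validator registry, the two-signature quorum, and the use of HMAC as a stand-in for real digital signatures.

    import hashlib
    import hmac

    # Hypothetical registry of participants authorized to counter-sign entries
    VALIDATOR_KEYS = {"carrier-a": b"key-a", "customs-b": b"key-b"}
    REQUIRED_SIGNATURES = 2  # assumed quorum for this sketch

    def countersign(validator, entry_digest):
        # A validator signs the digest of the entry, not the entry itself
        return hmac.new(VALIDATOR_KEYS[validator],
                        entry_digest.encode(), hashlib.sha256).hexdigest()

    def entry_is_valid(entry_digest, signatures):
        # Count only signatures from authorized validators that actually verify
        good = [v for v, sig in signatures.items()
                if v in VALIDATOR_KEYS
                and hmac.compare_digest(sig, countersign(v, entry_digest))]
        return len(good) >= REQUIRED_SIGNATURES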

Danger of public accessibility

I then approached the topic of negatives. The blockchain use-case will work fantastically when everything is running smoothly and everyone does their duty properly; but the reason a blockchain is used is that at some point someone will slip up and something bad will happen. It is at that point that the blockchain is used to trace back from the bad thing that happened to whoever caused it. Without this need, there is no need for blockchain. So, by definition, it will happen.

If this investigation is done in public, the participants will all get very nervous, possibly deciding to stop participating. I pointed out that the FDA does these kinds of investigations, and 'almost always' does them in secret. This is because the bad actor has usually made a mistake, not done something malicious. A penalty from the FDA gets them to change their ways without scaring everyone, or scaring the whole public. The FDA chooses to make some cases public, usually due to malicious intent, or to 'make an example' out of a big player. With blockchain, especially a publicly accessible one, everyone can do the homework to determine who the bad actor is; thus it will be very publicly known… and all the good actors will start to question whether they should participate, as everyone knows that they will slip up at some point…

The answer I got was… hmmm, never thought of that…

The big Aha moment

In the discussions I got an insight that for some reason had escaped me till that point... For any specific use-case, like Supply Chain Management, one does not necessarily have just one chain. I had been thinking that all of the blockchain features that the use-case would benefit from should / would be provided by one blockchain. But this is not necessarily the best approach. The specifics were that one need is to track the flow of supplies, where another need is to know that there is a confirmed demand for that specific supply. These interact in that the supply should not be shipped without a confirmed purchase of that supply. But the two functions don't need to be on the same blockchain. The actors that should have access to the flow of supplies form a slightly overlapping Venn diagram with those that should have access to the confirmed purchase facts. Only a few of these actors belong in both camps; most do not.

I think the healthcare use-case of supply-chain management, specifically of drug components, likely has similar needs and functionalities that would benefit from blockchain, but with mutually exclusive audiences.

Some actors might be in the privileged group for tracking movement, but only readers in the other use-cases. Some might not have any visibility into the purchase chains...

Simple concept once it is laid out... 


  1. Start with a use-case that has fewest actors to keep it as simple as possible. You can always add actors later.
  2. Design each blockchain to handle the right audience and active participants. You can always add more functional chains later.
  3. Put as little data into the blockchain as you can get away with for the use-case need. You can always add more data elements when they are found to be critical.
  4. Augment blockchain functionality with classic data management methods. Don't abuse the blockchain, use it carefully.
  5. Think through abuse scenarios up front. Happy-path is nice, but not why you are designing to use blockchain.

Monday, March 19, 2018

FHIR really was positively different

I had a short but very satisfying interaction with a developer at HIMSS 2018. They had implemented a pilot project using FHIR. Their use-case was to instrument the DoD systems with a FHIR Server API, and similarly instrument a VA Vista system with a FHIR Client. The goal was to show how providers at the VHA could see the DoD data while using the Vista experience they are familiar with. 

They found that adding a FHIR Server API in the front of the DoD system to be quite achievable. 

They found that placing a FHIR Client API behind an instance of a VHA Vista to be quite achievable. I spent a bit more time to understand this, as I have been working within the VHA for over a year. What he actually did was stand up a new instance of Vista. It should be noted that each VHA site has its own instance of Vista, and Vista is an open-source project, so it is easy to stand up your own instance. What he did differently is that rather than have a database under that Vista instance, he placed a service that implements a FHIR Client. Thus when Vista wanted to hit its own database, he would intercept that database request, make the equivalent FHIR API call, and return the FHIR response as the database response. I suspect he did some optimization, but you get the picture.
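
A rough sketch of that pattern as I understood it: the call that would have gone to Vista's database is answered by a FHIR read instead. The endpoint, token, and function name here are all hypothetical.

    import requests

    # Hypothetical configuration: the partner FHIR Server endpoint and token
    FHIR_BASE = "https://fhir-server.example.org/fhir"
    TOKEN = "configured-security-token"

    def read_patient(patient_id):
        """Stand-in for Vista's database read: answer it with a FHIR call."""
        resp = requests.get(
            f"{FHIR_BASE}/Patient/{patient_id}",
            headers={"Authorization": f"Bearer {TOKEN}",
                     "Accept": "application/fhir+json"})
        resp.raise_for_status()
        # The FHIR response would then be mapped back into the record
        # structure Vista expects from its database layer.
        return resp.json()

Retargeting such a shim at a different server, as he later did with the Cerner sandbox described below, is then just a change of base URL and security token.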

He had this fully working, and it worked very fast. A VHA user could interact with this instance of Vista just as if it was their own site data. The interaction, User Experience, was identical to what they are used to.

Knowing that VHA might be switching over to Cerner, and knowing that Cerner has a FHIR sandbox available... he directed his Vista FHIR Client to speak to the Cerner sandbox FHIR Server. After changing only the endpoint configuration and security token settings, he found that his Vista instance worked almost flawlessly. This system was not designed to work with the Cerner FHIR Server... BUT... because FHIR is actually delivering on Interoperability, by being simple and well defined, the system just worked.

When I mentioned that I am part of the FHIR standards development team, he wanted me to know how excitedly happy he was with this experience. He explained that he has long experience with networking standards, including HL7 v2, CDA, and others. He wanted me to know that "FHIR really was [positively] different."

I have no idea what will happen with this pilot. It was not part of the VHA Lighthouse project. It was also not part of the FHIR work going on with MyHealthVet (the project to which I am now assigned).

Friday, March 2, 2018

FHIR Consent Resource mapping to Kantara Consent Receipt

I really like the work that Kantara is doing with Consent Receipt. I think they are doing what is needed. Specifically they are not trying to define an internal consent resource, nor one that would go from one data controller to another data controller. They are focused on giving the Individual something (a receipt) that is evidence of the Consent Ceremony, and contains the terms agreed to. In this way, the Individual has evidence that can be used later when their consent terms have been violated. Much like a retail receipt is used by a consumer when the thing they bought turns out to be broken or defective.

[Diagram: the Kantara Consent Receipt]

Perspective difference between FHIR and Kantara: 

[Diagram: the FHIR Consent resource]

The Kantara Consent Receipt is intended to be a self-contained message, whereas the FHIR Consent is one Resource to be used within a FHIR infrastructure. The FHIR Consent is focused on just the consent specifics.

Thus to create a complete equivalent one would need to assemble from FHIR (a sketch follows the list below):

Bundle { MessageHeader(1..1), Consent (1..1), Provenance(1..1)}

  • Bundle assembles everything together
  • MessageHeader explains to whom the message is going, and from who it originates
    • I am assuming a pushed message, but FHIR clearly can be used RESTfully, or a Document could be created.
  • Provenance carries the proof of origination (signatures)
  • Consent carries the consent specifics
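
A skeletal sketch of such a Bundle, expressed as a Python dict of the JSON form: the element names come from the FHIR specification, but the ids, endpoints, and timestamps are placeholders, and the instance is not a validated example.

    # Skeletal FHIR message Bundle; values are illustrative placeholders
    consent_receipt_message = {
        "resourceType": "Bundle",
        "type": "message",
        "entry": [
            {"resource": {
                "resourceType": "MessageHeader",
                "source": {"endpoint": "https://controller.example.org/fhir"},
                "destination": [{"endpoint": "https://individual.example.org/inbox"}]}},
            {"resource": {
                "resourceType": "Consent",
                "status": "active",
                "patient": {"reference": "Patient/example"},
                "dateTime": "2018-03-02T10:00:00Z"}},
            {"resource": {
                "resourceType": "Provenance",
                "target": [{"reference": "Consent/example"}],
                "recorded": "2018-03-02T10:00:00Z",
                "signature": []}},  # would carry the proof of origination
        ],
    }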

Mapping FHIR Consent to Kantara Consent Receipt.

FHIR Consent                  Kantara Consent Receipt
identifier                    4.3.5 Consent Receipt ID
status                        (N/A - would be active)
scope                         (N/A - would be fixed at privacy-consent)
category                      4.5.5 Consent Type
patient                       4.4.1 PII Principal ID
dateTime                      4.3.3 Consent Timestamp
performer                     4.4.3 PII Controller
                              4.4.5 PII Controller Contact
                              4.4.6 PII Controller Address
                              4.4.7 PII Controller Email
                              4.4.8 PII Controller Phone
                              4.4.9 PII Controller URL
                              4.4.4 On Behalf
organization                  4.4.3 piiControllers (including all contact information)
source[x]                     4.7 Presentation and Delivery
  authority                   4.3.2 Jurisdiction
policyRule                    4.4.10 Privacy Policy
provision                     4.5.1 Services
  period                      4.5.9 Termination
  actor.role                  4.5.10 Third Party Disclosure
  actor.reference             4.5.11 Third Party Name
  action                      4.5.2 Service
  securityLabel               4.5.12 Sensitive PII
                              4.5.13 Sensitive PII Category
  purpose                     4.5.3 purposes
                              4.5.4 Purpose
                              4.5.5 Purpose Category
                              4.5.8 Primary Purpose
  class                       4.5.7 PII Categories
  code                        4.5.7 PII Categories

Not well mapped:

I am pleased and very surprised at how well these map. The following items are where there were differences. These differences seem reasonable given the purpose of each and the capabilities of the environments.

The following items from the Kantara Consent Receipt do map, but not perfectly.
  • 4.3.4 Collection Method - a description of the method by which consent was obtained
    • for FHIR, the current presumption is that the data is collected while treating the patient for healthcare reasons. This presumption is likely not going to hold as FHIR matures
  • 4.5.8 Primary Purpose -- indicates if a purpose is part of a core service of the PII controller
    • Seems to be a way to differentiate primary purpose from secondary. 
    • FHIR Consent addresses purpose of use regardless of primary or secondary
  • 4.5.9 Termination - conditions for the termination of consent. link to policy defining how consent or purpose is terminated.
    • FHIR Consent has a timeframe to automatically terminate, but does not address how the patient would take action
There are a few additional capabilities of the FHIR Consent that are not yet represented in Kantara (a sketch of some of these follows the list below):
  • verification -- these elements are there to hold who verified the consent ceremony. I am not convinced that this is commonly needed. 
  • dataPeriod -- often a patient is willing to allow some data to flow, but might want to block data from a specifically sensitive period of time. The timeframe is an easy thing to identify, and to enforce.
  • data -- in FHIR we can point at exactly which data is controlled by this rule
  • nested provisions -- FHIR Consent can define nested provisions, thus enabling "this, but not that"...
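
To illustrate two of these capabilities, dataPeriod and nested provisions, here is a hedged sketch of a Consent fragment. Element names are from the FHIR specification; the patient, dates, and sensitivity code are placeholders.

    # Illustrative FHIR Consent fragment: share data from a timeframe,
    # but deny a sensitive category within it (enable this, but not that)
    consent = {
        "resourceType": "Consent",
        "status": "active",
        "patient": {"reference": "Patient/example"},
        "provision": {
            "type": "permit",  # share my data...
            "dataPeriod": {"start": "2010-01-01",   # ...from this period only
                           "end": "2016-12-31"},
            "provision": [{
                "type": "deny",  # ...but never the sensitive category
                "securityLabel": [{"system": "http://hl7.org/fhir/v3/ActCode",
                                   "code": "PSY"}]}],
        },
    }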

Thursday, March 1, 2018

Big audit entries

The ATNA audit scheme has been re-imagined in FHIR as the AuditEvent Resource.
The reformatting is only to meet the FHIR audience's expectations for readability. For this there are really useful datatypes, structure, referencing, and tooling. There is no intention to change the scheme in any fundamental way. There is a mapping between the two that is expected to translate forward and backward without loss of data. The reality is there might be some cases where the mapping is lacking....

Small entries are large

One of the observations many make about ATNA and AuditEvent is that the schema itself inflates what could be recorded in a classic log file as a simple unstructured string of about 115 characters. The following example comes from the examples in the FHIR AuditEvent for an Accounting of Disclosure log entry:
Disclosure by some idiot, for marketing reasons, to places unknown, of a Poor Sap, data about Everything important.
becomes a 4604-character XML object or a 4156-character JSON object (hmm, JSON is smaller, but not by much).

THIS is a ridiculous example, as the string clearly is not sufficient; but the point I do want to make is that adding structure makes the space needed larger.

This is a tradeoff that is simply a fact of the difference between unstructured strings and a structured, coded object. The string might be useful, but often needs special processing to get at the data embedded in that string. More often in a string world, a log analyst must correlate many log entries to get the full story.

The point of ATNA and AuditEvent is that the original recorder knew exactly the values of Who, What, Where, When, Why, How, etc... so the goal of ATNA and AuditEvent is to provide well-defined ways to record this so that it doesn't need to be guessed at.

So the reality is that an ATNA or AuditEvent entry is likely larger than a string... but most 'happy path' audit log entries are 1-2k in size. Not small, but also not big.

Big log entries

The problem is that there are occasionally cases, failure-modes, where it would be useful to record more information. For example, when there is a technical failure one might want to record the 'stack trace'; or when a request is rejected, one might want to record the request details and response error message more fully.

Or some want to record the results of a Query, something I caution against as it fills the audit log with data that is easily re-created. Often these results are saved in other databases locally, so in that case just link the AuditEvent with that database entry. This could be done by just putting a database index into an AuditEvent.entity.

So sometimes there is a need to record a big amount of data along with your audit log entry... so, how should this need be handled?

FHIR offers an interesting solution: the Binary resource. That is to say, you put the big blob into a Binary, and have the AuditEvent point at that Binary. There is an additional feature of Binary that is useful for identifying the security that should be applied to that Binary instance: the Binary.securityContext can point at the AuditEvent instance.
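
A sketch of the pairing, using STU3-style element names; the ids and content are invented, but the structural linkage (Binary.securityContext one way, AuditEvent.entity.reference the other) is per the FHIR specification.

    # The big blob lives in a Binary; its security context is the AuditEvent
    binary = {
        "resourceType": "Binary",
        "id": "stack-trace-1",
        "contentType": "text/plain",
        "securityContext": {"reference": "AuditEvent/failure-1"},
        "content": "PHN0YWNrLXRyYWNlPg=="}  # base64 of the big blob

    # One AuditEvent.entity within AuditEvent/failure-1 points at the Binary
    audit_event_entity = {
        "reference": {"reference": "Binary/stack-trace-1"},
        "description": "full stack trace captured at failure"}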

More about FHIR and Audit Logging

Wednesday, February 21, 2018

Maturing FHIR Connectathon without confusing the marketplace

Grahame, being the fantastic Product Manager for FHIR that he is, is asking the FHIR community for input on how FHIR Connectathon should evolve. I started to write a few lines but realized that I had more to say than a few lines. (yeah, I know... blah blah blah)

IHE has been doing Connectathons for almost 20 years (the first was in 1999, with Radiology). IHE did NOT invent the concept of Connectathon. I was involved in TCP, IP, UDP, NFS, TELNET, and FTP connectathons back in the late 1980s. They were almost exactly the same kind of events. I have a detailed article on what a Connectathon is, and is not... please review it - What is Connectathon? I have also written about how nice it is to see FHIR Connectathon changing.

I think IHE and FHIR need to be as distinct as possible, but clearly there will be overlap. Each holds a unique position today that those of us involved in both see clearly. However, the outside world already finds it hard to differentiate them. This outside-world perspective should be seen as a very important factor: if the consumers of our standards and connectathons don't understand the value, or are confused by it, then it is not valuable or clarifying.

This does not mean that the overlap should be avoided; it should just be deliberate and clearly communicated. So far, FHIR Connectathon has been more of a 'hackathon', and that has been exactly what the FHIR community needed. The value today: very quick (agile) testing of the specification, a proving ground for app development ideas, a safe place to share ideas and push oneself. A critical part of this success is that it is short (1.5-2 days), very inexpensive (compared to IHE Connectathon), and very informal (compared to IHE Connectathon). These are strengths of FHIR Connectathon today that we should not forget.

The mature part of the FHIR community is ready to move to a new step. I don't think that new step is all the way to what IHE Connectathon does, and certainly far away from certification (which IHE Connectathon also does for certain tracks). The less mature parts of the FHIR community do need a less formal place to play; however, things like FHIR Dev Days are possibly filling this need.

So, where possible, cooperate with IHE Connectathon. Leverage the same tooling where possible. Leverage the same process and event space where possible.

IHE should focus on multi-standard use-cases, and domain specific use-cases. IHE should focus on end-to-end flows that are documented in a Profile or Implementation Guide. 

FHIR should focus on building block use-cases that use FHIR alone, and generally re-usable use-cases. FHIR Connectathon would be more the place to prototype, to investigate, to develop a concept, to build a consensus.

FHIR Connectathon should continue to advance the complexity of the scenarios, and the integration of small scenarios into larger ones. Mature the testing of building-block scenarios such that they can be held up as complete, something that can be used to do BDD or TDD. A 'standard' modularity beyond what we see today as a 'standard': not just the 'encoding', but also testing and block building.

This does not mean FHIR Connectathon doesn't do full end-to-end workflows, just like it doesn't mean IHE would never do hackathon-like things. The overlap will exist; it should just be clear.

Keep our eyes on the Purpose of a Connectathon

To a standards organization, a Connectathon is a way to mature the standard. Both IHE and FHIR have connectathon as a required part of their governance of maturity.

The purpose of a connectathon to a participant is to gain experience interoperating with your potential future partner in a real-world exchange. By focusing on testing in a safe place like a connectathon, one can push the limits of one's own software. The take-away is confidence that when a customer needs your software to talk to that specific peer, it will work right away; and if it doesn't, then you have experience that guides your reaction, including possibly calling on that personal relationship you created at connectathon.

Formal checkmarks, or certification, are far less valuable than this. Mostly because reality will happen, and that checkmark or certification means nothing when reality isn't working.


Sunday, February 4, 2018

Apple should have a HEART

Apple has re-entered the Healthcare space with their new announcement about support for a person to maintain their health data on their iPhone. There is really nothing technically new, but new or not is not the important bit. What is important is that any visibility given to the Health Data portability problem is good for making changes.

My understanding of what has happened is that Apple has moved from their own proprietary API to support for the Argonaut-defined APIs. These Argonaut-defined APIs would qualify as a 'standard'; they are based on #FHIR at an older version, DSTU2. So their adoption of a standard API is big. It is not hard; many have done exactly this. But it is big because it is Apple; and with Apple we get marketing of the usefulness of the concept, and we get a motivation for Providers to support the Argonaut API.

The bad news is that this is DSTU2, and that brings a risk that these APIs will be frozen at a non-Normative version of FHIR. I hope this doesn't actually happen. I hope that they evolve as FHIR evolves to Normative. The fact they started with DSTU2, and are ignoring the current STU3, is not good news for this hope of future normative FHIR.

Consumer empowerment aspect

My understanding is that Apple has adopted the SMART-on-FHIR security method and the Sync for Science privacy model. They expect the Patient (their user and iPhone owner) to navigate to each of their supported Healthcare Providers, and interact with that portal to give authority to release the records to the iPhone application. This model is defined as "Sync for Science", a really unfortunate name, as the name came from the original scope but the solution is generally useful.

The benefit for Healthcare Providers is that they manage everything about the identity linkage: they own the username (and password) the patient uses at their portal, they own the linkage from that username to their Patient ID, and they manage the Consent holding the patient authorization to release to a specified and future-authenticatable application on the iPhone.

The Healthcare Providers usually manage the identifiers by sending their known patients a postal mail letter with a username and a one-time secret. The person logs into their portal, gives the secret, and then proceeds to create the password they want. Once this is done, the Healthcare Provider has confidence they can manage the username/password, and that they know strongly which patient it represents.

The Healthcare Provider manages consents using whatever system they have internally. The consent never needs to be in a standard form, or any specific form or availability beyond what their organization needs. It just needs to utilize the OAuth mechanism to bind the instance of the application the patient is using with the patient authorization (consent).
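
In OAuth terms, that binding looks roughly like the authorization-code flow used by SMART-on-FHIR. This is a minimal sketch; the endpoints, client id, scope, and redirect URI are hypothetical.

    import requests

    # Hypothetical endpoints and app registration at one provider's portal
    AUTHORIZE_URL = "https://portal.example.org/oauth/authorize"
    TOKEN_URL = "https://portal.example.org/oauth/token"
    CLIENT_ID = "iphone-health-app"
    REDIRECT_URI = "myapp://callback"

    # Step 1: send the patient to the provider's portal to log in and approve
    # release; the provider records the consent internally, in any form it likes
    login_link = (f"{AUTHORIZE_URL}?response_type=code&client_id={CLIENT_ID}"
                  f"&scope=patient/*.read&redirect_uri={REDIRECT_URI}")

    # Step 2: the portal redirects back with a code; trade it for a token
    def redeem(code):
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI})
        resp.raise_for_status()
        # The access token now binds this app instance to the patient's consent
        return resp.json()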

Lastly, because it is a relationship with the Patient themselves, when the Healthcare Provider releases the data, they are logically releasing the data to the patient themselves. So no Business Associate concerns.

Apple in this case is just hosting an application; they are also the author of that application. They never need to know the Patient Identity, but they will be given highly sensitive patient data.

Why Apple changes everything

So why is the fact that Apple is just doing what many applications have done before a big thing?

Apple has a huge number of people in the Apple ecosystem. Therefore the effort that existing Healthcare Providers need to expend to support Apple has a better return on investment, even if one only considers the 'bang for the buck' in terms of the number of that Healthcare Provider's patients (bang) for the level of effort to do the work (even if high). Note this was also a motivation under Apple's previous architecture that used a proprietary API, but use of standards adds to scalability.

Apple people trust that Apple will keep their information, and information about what they do on Apple, private. This is unlike other big identity providers like Google or YAHOO. The Apple people are special in this way, but so is the Apple organization. They have a proven track record (unlike YAHOO) of keeping their data secure, and they have a proven record of not letting their data get mined for advertising opportunities (unlike Google). Therefore the people are less worried that Apple will know which healthcare providers they are seeing.


So the current solution is absolutely fine. The problem it has is the ability to scale. This is where HEART comes in. HEART is a standard specification; I have participated in its development, and have blogged about it.

The basic explanation is that HEART leverages OAuth, specifically a configuration called User Managed Access (UMA), to enable an "Authorization Server" that is selected by the Patient to represent privacy access-control decisions according to rules the Patient chooses. Essentially, it moves the privacy authorization decision out of the Healthcare Provider.

This is done by giving high assurance to the Healthcare Provider that the patient has chosen a specific HEART server as their authorization decision service. Thus the Healthcare Provider can trust any PERMIT or DENY decision that the authorization decision service (the HEART service) makes for that patient in that circumstance. This enables the Patient to establish rules ONCE, whereas in the Sync for Science model the Patient must set the rules as many times as there are Healthcare Providers holding data on that Patient. Some patients have a small number of Healthcare Providers; others have many.
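
A sketch of the UMA choreography under those trust assumptions. The endpoints are hypothetical, and the ticket parsing and error handling are simplified well beyond what a real client would do.

    import requests

    # Hypothetical endpoints: the provider's FHIR server (UMA resource server)
    # and the patient-chosen HEART authorization server
    RESOURCE_URL = "https://provider.example.org/fhir/Patient/123/Observation"
    AS_TOKEN_URL = "https://heart-as.example.org/uma/token"

    # 1. First attempt carries no token; the resource server answers 401 with
    #    a permission ticket naming the patient's chosen authorization server
    first = requests.get(RESOURCE_URL)
    ticket = first.headers["WWW-Authenticate"].split('ticket="')[1].rstrip('"')

    # 2. Present the ticket to the patient's HEART server, which applies the
    #    rules the patient established once, and issues a requesting-party token
    rpt = requests.post(AS_TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": ticket}).json()["access_token"]

    # 3. Retry with that token; the provider trusts the PERMIT it represents
    record = requests.get(RESOURCE_URL,
                          headers={"Authorization": f"Bearer {rpt}"})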

Apple should have a HEART!

This is an elegant solution, but it needs some major new player to make it come to life. Enter Apple. The two factors I mention above are critical. Patients trust Apple, and Healthcare Providers like Apple. These two are unique, as I mention above, but that is not enough.

The third factor is critical. Apple knows high-quality identity information about their customers. Thus it is more likely that, as an Identity provider, they will be able to more accurately and more authoritatively build the binding between their identity (Apple ID) and the various Patient identifiers at the various Healthcare Providers. This patient identity problem is the biggest 'technical' problem in ALL of the Health Information Exchange (HIE) solutions: binding a real-world identity with a Patient identifier in a way that has few false-positives (hopefully zero), few false-negatives (hopefully zero), and can't be abused by malicious actors (authenticatable and traceable).

Further, the Apple ecosystem is a place where some trust can be placed. If there is malicious misuse of the healthcare data exchange, the Apple ecosystem can be used to find the malicious actor. This is to say that there is trust that Apple knows what the Apple user is doing, and can find Bad Apples. (sorry, had to)


Is it critical that Apple start to build out their HEART solution? No, but it is exciting that there is finally someone that I think could pull it off.

Wednesday, January 31, 2018

FormatCode granularity

I was asked the following question:
Confused as to the granularity required for formatCode.
The HL7 link seems to be at a coarse level:
but a recent update has formatCode at a document-specific level:
This links to FHIR and I assume MHD

Any advice?
My response

The FormatCode is there to differentiate 'technical format'. It is a further technical distinction more refined than the mime-type. So it is related to mime-type.

FormatCode is not a replacement for class or type. In fact it is very possible to have the exact same type of content available in multiple formats.
See article: Multiple Formats of same document

This includes FHIR Documents. See article on FHIR Documents in XDS

It is true that IHE-defined FormatCodes tend to be one per Profile, whereas all of C-CDA R2.1 is one FormatCode. This difference in scope seems like a very big difference, but at the technical level it is not. That is to say, the IHE XPHR profile defines a unique set of constraints on the format of the content, whereas C-CDA R2.1 similarly defines a unique set of constraints on the format of the content.

This is a good time to explain that what IHE calls a "Profile" is commonly what HL7 would publish as an "Implementation Guide". Thus they are often very similar in purpose.

It is true that XPHR has only one type (34133-9 Summary of Episode Note), whereas C-CDA R2.1 covers a set of unique use-cases that are each a unique clinical 'type' of document. This is a good example of why formatCode is not the same thing as 'type': type expresses the kind of clinical content, whereas FormatCode expresses the technical encoding used.

So the FormatCode focuses on the technical distinction as a sub-type of mime-type, and should be as specific as necessary to understand the Profile (or Implementation Guide) set of constraints.
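
To make the layering concrete, here is an illustrative MHD-style DocumentReference fragment. The LOINC type code and the IHE XPHR FormatCode are real vocabulary; the instance itself is a sketch, not a validated example.

    # 'type' says what the document is clinically; contentType and format
    # together say how it is technically encoded
    document_reference = {
        "resourceType": "DocumentReference",
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "34133-9",
                             "display": "Summary of Episode Note"}]},
        "content": [{
            "attachment": {"contentType": "text/xml"},  # mime-type
            "format": {"system": "urn:oid:1.3.6.1.4.1.19376.1.2.3",
                       "code": "urn:ihe:pcc:xphr:2007"}  # IHE XPHR FormatCode
        }],
    }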

Further questions are welcome.