Monday, January 16, 2017

IHE on FHIR

IHE is still relevant in a FHIR world. But FHIR has changed the world, and IHE needs to adjust to this new world.

Profiling is still needed

The concept of profiling FHIR is still needed. The difference today is that FHIR is ready and instrumented to be Profiled. It even has a set of Profiles coming from HL7. This is not a threat, this is an opportunity for IHE.

IHE has a set of 10 profiles today (MHD, PDQm, PIXm, mACM, (m)ATNA, CMAP, GAO, RECON, DCP, MHD-I). Most of these are basic profiles: rather than complex profiles, they simply point to a FHIR Resource as the solution for a defined problem. This is helpful to the audience, but does not add much value. It is this lack of value-add that makes what IHE has done so far with FHIR profiling seem to conflict with core FHIR.

That FHIR needs to be Profiled is a fact. A Profile is fundamentally a set of constraints and interactions applied to a standard to achieve a specific purpose. This is just as true of FHIR as of any other standard.

IHE needs to focus on larger profiles. Re-stating the basics of a transaction (http REST, in FHIR's case) is not helpful. But there are many larger problems to solve: larger workflows, larger data integrity, larger system-of-systems.

FHIR is Profile Instrumented

FHIR is ripe for being profiled, and is instrumented to be profiled. That is, it has mechanisms built into the specification (the conformance resources and their tooling) to make profiling easier, more effective, repeatable, reusable, and discoverable.
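As a sketch of what that instrumentation looks like: a profile is expressed as a StructureDefinition resource that constrains a base resource, and validators or the IG Publisher can consume that form directly. The example below is minimal and hypothetical (the url and name are made up); it requires every Patient to carry at least one identifier.

```python
# Hypothetical FHIR StructureDefinition (shown as a Python dict for readability)
# that constrains Patient to require at least one identifier.  The url/name are
# illustrative only; real profiles would be published in a registry.
patient_profile = {
    "resourceType": "StructureDefinition",
    "url": "http://example.org/fhir/StructureDefinition/sample-patient",  # assumed
    "name": "SamplePatient",
    "status": "draft",
    "kind": "resource",
    "abstract": False,
    "type": "Patient",
    "baseDefinition": "http://hl7.org/fhir/StructureDefinition/Patient",
    "derivation": "constraint",
    "differential": {
        "element": [
            # Tighten cardinality: identifier goes from 0..* to 1..*
            {"id": "Patient.identifier", "path": "Patient.identifier", "min": 1}
        ]
    }
}
```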

IHE needs to modernize

  • The audience expects use of the FHIR Conformance resources and tools (see above)
  • IHE can only publish PDF, distributed over FTP, which is not good
  • IHE does not have a usable mechanism for creation, publication, and maintenance of vocabularies, value sets, and URLs
  • IHE can't publish conformance constraints in a technical format (XML, JSON)
  • PDF is clunky, not hyperlinked, and not helpful to programming tools
  • The audience can't find IHE if they start at the FHIR specification
  • Yet, Implementation Guides that use FHIR.ORG are found
  • IHE has little development and deployment community engagement
  • FHIR has multiple tools to help outreach and interaction with the community
  • IHE has email
  • IHE workgroups are unevenly engaged and act inconsistently

IHE needs to leverage FHIR 

IHE should not fear HL7, and HL7 should not fear IHE. Working together with defined and distinct purposes is best for both.

IHE supports FHIR.ORG. It is my understanding that FHIR.ORG is where activities outside of the specification will happen. Much of what IHE does is in this space. This includes the tooling capability and support for profiling, but also community engagement.
IHE Profiles of FHIR must be 'built' using the FHIR "IG Publisher". They would be balloted and published by IHE. The normal IHE supplement (PDF) might be okay, or might need slight changes to Volume 2 (Volume 1 is good). Possibly a new Volume 5 is needed to "hold" technical pointers from the FHIR "IG Publisher". This is because today's format of Volume 2 is overly complex for a FHIR profile, especially for http REST transport. Besides the supplement form, the technical bits would be published on the FHIR Registry with pointers back to IHE.
IHE and HL7 leadership engagement is needed. We have gotten this far based on individual efforts, but the scale of the solution is too big for individual efforts. We need formal agreements, formal sponsorship, and formal recognition. I think the time is ripe now.

Tuesday, January 10, 2017

FHIR documents in XDS

How does one put a FHIR Document into XDS?
How does one find a FHIR Document in XDS?

Both questions are asking very similar things. The key is the XDS fundamental metadata element mimeType. Let me explain...

By XDS I mean, more broadly, the whole Document Sharing family: XDS, XCA, XDR, XDM, and MHD, along with the narrower IHE profiles DSUB, MPQ, SeR, and MU.

To learn more on Document Sharing, start here:  Eating an Elephant -- How to approach IHE documentation on Health Information Exchange (HIE)

So the Document Sharing family is a content-agnostic mechanism for sharing patient-specific documents. The only things that are fixed are:
  • Patient-specific content -- all the documents must be about a specified patient
  • Document format -- the content is packaged as documents, so it is not a REST server
  • Metadata -- all the other metadata in XDS is there to help with searching or navigating through the documents that have been shared, to find the right one to retrieve

Initially Document Sharing was about 'historic' documents. That is, a document is published, and in the future it can be discovered and retrieved. Thus the document is "Shared". Later the family gained support for "On-Demand" documents, meaning documents that are created when they are retrieved. An On-Demand document is still a document; it is just created at the time it is retrieved, and thus contains the current knowledge about that patient at that time. Both of these might still be needed for FHIR Documents.

FHIR is more popularly known for its access model using http REST. That is where there is a server that holds the current version of the knowledge. Systems can "Create", "Read", "Update", and "Delete" (CRUD) the knowledge using the http(s) protocol.
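For illustration, here is a minimal sketch of those CRUD interactions against a hypothetical FHIR server base URL, using Python's requests library; the base URL and resource content are assumptions, not any particular product's API.

```python
import requests

BASE = "https://example.org/fhir"  # hypothetical FHIR server base URL
HEADERS = {"Content-Type": "application/fhir+json", "Accept": "application/fhir+json"}

# Create: POST a new Patient resource
patient = {"resourceType": "Patient", "name": [{"family": "Example", "given": ["Pat"]}]}
created = requests.post(f"{BASE}/Patient", json=patient, headers=HEADERS)
patient_id = created.json()["id"]   # assumes the server echoes the created resource

# Read: GET the resource back by its logical id
current = requests.get(f"{BASE}/Patient/{patient_id}", headers=HEADERS).json()

# Update: PUT a modified copy back to the same id
current["active"] = True
requests.put(f"{BASE}/Patient/{patient_id}", json=current, headers=HEADERS)

# Delete: remove the resource
requests.delete(f"{BASE}/Patient/{patient_id}")
```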

FHIR has a Document model. It is abstractly very similar to CDA, but uses the simpler Resources and encodings that FHIR has to offer. A FHIR Document is contained in a Bundle, and has a Composition, and all kinds of other stuff. There is also a workgroup creating transforms from/to CDA -- CDA on FHIR. I am not here to give a tutorial on FHIR Documents, but I need it to be clear that FHIR has the Document concept.
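A rough sketch of that structure, with made-up identifiers and a hand-picked LOINC document type, might look like the following; the point is only that the Bundle is of type "document" and its first entry is the Composition that binds everything together.

```python
# Skeleton of a FHIR document: a Bundle of type "document" whose first entry is
# a Composition, followed by every resource the Composition references.
# All ids and references below are illustrative.
fhir_document = {
    "resourceType": "Bundle",
    "type": "document",
    "entry": [
        {
            "fullUrl": "urn:uuid:composition-1",
            "resource": {
                "resourceType": "Composition",
                "status": "final",
                "type": {"coding": [{"system": "http://loinc.org", "code": "34133-9",
                                     "display": "Summary of episode note"}]},
                "subject": {"reference": "urn:uuid:patient-1"},
                "date": "2017-01-10",
                "author": [{"reference": "urn:uuid:practitioner-1"}],
                "title": "Example summary document",
                "section": [{"title": "Problems",
                             "entry": [{"reference": "urn:uuid:condition-1"}]}],
            },
        },
        {"fullUrl": "urn:uuid:patient-1", "resource": {"resourceType": "Patient"}},
        # ... Practitioner, Condition, and any other referenced resources follow
    ],
}
```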

This is where the question comes from... If I have a FHIR Document, how would I publish it into XDS? If I want FHIR Documents, how would I find them in my XDS system? -- or more broadly in any of the Document Sharing profiles, because this applies to XDM (Direct Secure Messaging) and others...

So the FHIR Document is patient-specific... thus it should be clear how the patient identity is related.

Key to FHIR Documents in XDS

The key is that XDS has a metadata element "mimeType". It is this that differentiates a CDA document from a FHIR document. So for a FHIR document the mimeType is going to be one of:

  • XML: application/fhir+xml
  • JSON: application/fhir+json

FormatCode might be more powerful

The XDS formatCode holds the indicator of the technical format that the document follows. This is most often a URN defined in an IHE Profile or by another external body. This is very possible with FHIR Documents too.

I expect a set of FHIR-specific Implementation Guides (the FHIR concept closest to an IHE Profile) for FHIR Documents to emerge. These 'profiles' would carry FHIR StructureDefinition-based constraints, and the unique identifier of that StructureDefinition would go into the formatCode.
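To make the mapping concrete, here is a sketch of how those two pieces of metadata surface when publishing through MHD, where the XDS DocumentEntry is expressed as a FHIR DocumentReference: mimeType maps to content.attachment.contentType and formatCode maps to content.format. The profile URL used as the formatCode value, and the coding system chosen for it, are purely hypothetical.

```python
# Illustrative MHD-style DocumentReference metadata for a FHIR document.
# The formatCode system/code values are assumptions for the sketch.
document_entry = {
    "resourceType": "DocumentReference",
    "status": "current",
    "indexed": "2017-01-10T00:00:00Z",
    "type": {"coding": [{"system": "http://loinc.org", "code": "34133-9"}]},
    "subject": {"reference": "Patient/example"},
    "content": [{
        "attachment": {
            "contentType": "application/fhir+json",       # XDS mimeType
            "url": "urn:uuid:fhir-document-1"              # where the Bundle is stored
        },
        "format": {                                        # XDS formatCode
            "system": "urn:ietf:rfc:3986",
            "code": "http://example.org/fhir/StructureDefinition/sample-document-profile"
        }
    }]
}
```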

All the other metadata simply describes the content.

It is just as applicable to a FHIR Document as it is to CDA, PDF, DICOM, or any other format. Note that XDS is happy to carry proprietary formats like Word too.


Sunday, January 8, 2017

NIST brings Privacy forward - NIST IR 8062


It is so good to see NIST bring Privacy out of the closet. I promoted the "hints of Privacy" that are deep within NIST 800-53, but they always needed to be enhanced with a harmonized set of Privacy Principles as a framework, a Privacy Impact Assessment, and Privacy Risk Management.

I led my previous employer to create a "Design Engineering Privacy and Security Framework". This leveraged the NIST frameworks, especially SP 800-53, but we added an overall framework to bring in Privacy as an equal goal to Security and Safety. We then added a Privacy Impact Assessment to discover and manage risks to Privacy. Bringing in Safety is important in healthcare, especially for Medical Devices, as balancing the risk management plans across the three is important to get all three risks reduced as low as possible. My Venn diagram speaks to the kinds of technical controls available to address each risk domain. Nothing is ever a clean bright line...

It is great to see NIST bring forward Privacy in NIST IR 8062 - An Introduction to Privacy Engineering and Risk Management (in Federal Systems) as a distinct, yet related, discipline.

Their stated purpose:
For purposes of this publication, privacy engineering means a specialty discipline of systems engineering focused on achieving freedom from conditions that can create problems for individuals with unacceptable consequences that arise from the system as it processes PII. This definition provides a frame of reference for identifying a privacy-positive outcome for federal systems and a basis for privacy risk analysis that has been lacking in the privacy field.
The great news about this is that their goal is to speak to those developing IT systems. Most of the other Privacy Frameworks are targeting those that are running IT systems. Even Privacy-By-Design, which declares it is 'design', is more about deployment than software or database design. Software engineers have trouble with these frameworks as they are not the prime audience. These other frameworks are speaking toward business management, and business risk. There is a need to speak to the engineering level audience.

NIST writes standards for the USA Federal Government, thus this standard is targeted at IT 'in Federal Systems'. This is simply a matter of NIST's scope. It has NOTHING to do with the usefulness or global applicability of this specification.

The publication of NIST IR 8062 - An Introduction to Privacy Engineering and Risk Management (in Federal Systems) is just the start. I have hopes that it will be refined and become more useful as experience using the NIST Privacy framework accumulates.

Wednesday, December 28, 2016

Building more Software Architects

In close to 30 years as a professional engineer, I find that some people are natural software architects, while other expert software engineers struggle with architecture. There seems to be a common characteristic among those people who can take a step back and 'architect'. Is this learned? I suspect anything can be learned. If so, what is the critical catalyst that triggers and feeds that learning?

I seem to run into people given the role of software architect when they are really a subject-matter-expert on a specific project. On that project they are superior to everyone else, but they are not an architect. I also run into people who really want to become a software architect, but can't seem to hack it.

I also see some excellent architects get pulled into management, where they waste away. Or worse, they end up as Program Managers, simply because they are the only ones who know all the moving parts. I am not saying everyone should strive to be a software architect, or that it is the pinnacle.

I ask because I think there are far too few true architects today. This is more true as we enter the system-of-system-of-systems world of the Internet-of-Things (IoT). An architect must be able to think short-term, long-term, horizontal-scale, depth-scale, modular, privacy, security, safety, reliability, continuous deployment, etc. All while being able to pivot when new information appears...

What are your top characteristics of a real software architect?
How did they get that way?

Friday, December 23, 2016

New Administration --> Fix Healthcare Problems

I am comforted to hear from many healthcare leaders that their advice to the new USA administration is to continue the progress we have made, including continued support for Exchange, Direct, and CDA, while encouraging FHIR. I VERY MUCH agree with this advice. Changing from these directions would kill much momentum and disrupt healthcare in a bad way. There are others encouraging him not to kill Obama Care. I suspect he won't kill it but re-brand it, or perhaps create a single-payer system.

There are some things in Healthcare that are broken in ways that are just nuts. Given that the new Trump administration is likely to be willing to do things that are against the norm for politics, I think we should recommend that these broken things be fixed. Because fixing them means radical change, and it appears that radical change is what we are in for over the next four years.

I will note that this was not my vote, and I am scared as hell. But it is a foregone conclusion, so we either stick our heads in the sand and hope our ass survives, or we do what we can to make the best of the situation.

My three things that are broken and need radical fix:

  1. Patient Identifier -- We need a national patient identifier. It won't be perfect, but it is badly needed. I have tried to make the point that this patient identifier can be opaque, and thus it can enhance Privacy. Today we share highly valuable demographics because that is the only way we can make a cross-reference. This is NUTS. Let's fix it. There are technologies today that allow us to have opaque identifiers while also assuring that the identifier can be validated (see the sketch after this list). There are technologies today that would allow purpose-specific queries for cases where the patient didn't bring in their identifier but there is a health-critical reason we need to look it up by demographics. There are technologies that can keep private the use of that identifier. Technology can scale today. This technology might be Block-Chain, but I don't think so due to the second need.
  2. Universal Privacy -- The patchwork of privacy regulations is getting in the way of progress. Declare that all humans have a right to Privacy. Define what that right means. Be reasonable (a right to be forgotten is useful but not reasonable). Override the patchwork of federal privacy, healthcare privacy, state privacy, etc. Privacy is not an option, or something someone can sell. Violations of these Privacy principles must result in punishment regardless of who committed the violation or how it happened. ONE set of rules, even hard rules, will be easier to deal with than the patchwork. This will result in less privacy failure, and less privacy denial. THIS should not be specific to healthcare. ONE right of Privacy. Note that the regulation should not include any technology-specific requirements, as technology changes and the regulation would then break.
      
  3. Incident Response Community -- Far too often something bad happens and knowledge of it is suppressed. I am not asking for public disclosure of everything. BUT the community should be enabled to learn lessons from others' failures. This is true of at least Safety, Privacy, and Security. There needs to be a way that authorized individuals representing every organization in healthcare can participate confidentially. That is, they can expose a failure within their organization without adverse reaction (they must still meet regulated requirements). What I mean is that this is a peer group that will not use the information against their peers. What should happen is that their peers help diagnose what happened, come up with an action plan, and update the lessons-learned so that all the peers can implement that lesson. The result is a community that only gets stronger. This does NOT inhibit competition, as competition should be on health and experience outcomes. This does happen in some circles, but needs government endorsement and encouragement.
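Below is a minimal sketch of what an opaque yet verifiable identifier (item 1 above) could look like. Everything here is an assumption for illustration: a real scheme would need an assigning authority, key management, issuance and lookup services, and far more scrutiny. The point is only that an identifier can carry no demographics and still be checkable.

```python
import hmac
import hashlib

# Illustrative only: an opaque identifier derived from a sequence number plus an
# HMAC tag issued by an assigning authority.  The key, format, and sizes are
# assumptions for the sketch, not a proposal.
ISSUER_KEY = b"secret-held-by-the-assigning-authority"

def issue_identifier(sequence_number: int) -> str:
    """Issue an opaque identifier that carries no demographics."""
    body = f"{sequence_number:012d}"
    tag = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{body}-{tag}"

def looks_valid(identifier: str) -> bool:
    """Anyone holding the verification key can confirm the identifier is well-formed."""
    body, tag = identifier.rsplit("-", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(tag, expected)

# Example: issue_identifier(123456) yields "000000123456-" plus an 8-character tag,
# and looks_valid() returns True for it.
```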
I am sure there are others. I just don't have knowledge of them. I suspect there are HUGE gains to be made in supply-chain, payment-chain, and malpractice. These are broad areas that seem to me to be sucking far more money out of the system than the value they provide to the system.

FHIR is not the solution to any of these broken things... but FHIR will be part of the healthcare solution.

Monday, December 12, 2016

IHE IT Infrastructure - 2017 work items

The IT Infrastructure workgroup has selected its work items for next year. It consists of 4 new work items, only one of which is a brand new concept; the other three are re-castings of old use-case needs into an http RESTful world. Only one of the new work items is not FHIR based.

  1. Healthcare Provider Directory -- IHE has two standards: Care Services Discovery (CSD), which has been adopted in several countries as a way to manage health worker and health facility data, and Healthcare Provider Directory (HPD), which has limited adoption. CSD and HPD are SOAP-based web services and are not compatible with systems deploying RESTful clients and servers.
  2. Patient-Centric Data-Element Location Services -- This is a profile of profiles, addressing the use-case need for an element-level perspective (i.e. FHIR) of the data held within documents in a Document Sharing infrastructure (i.e. XDS). This profile of profiles will show how to bring various profiles together to add an additional layer of Provenance. It orchestrates XDS, MHD, PDQm, QEDm, and various Document Content Profiles.
  3. Sharing platform for non-patient documents -- Support for documents like configuration-files, style-sheets, templates, instructions, etc. These have some metadata needs, driven by search use-cases, but will not contain patient specific information. 
  4. Remove Documents from XDS Repository -- Today the Metadata Update supplement has a method for removing a DocumentEntry, but that leaves the document in the Repository disconnected. This work item will address all Remove use-cases, covering both the metadata and the document.
In addition to these, the committee also recognizes significant work needs to be done to:
  • Upgrade existing FHIR profiles to STU3. This work likely won't happen until late in the cycle, as STU3 seems delayed. Most of these changes (MHD, PDQm, ATNA) will be mostly administrative. The changes to mACM and PIXm might be simple updates too, or might require significant consideration of the best way to solve them given the STU3 content.
  • Maintenance tasks. The CP backlog is better than last year, but not much better. Therefore ITI will continue to focus on resolving this backlog, meeting more often (weekly), with targeted meetings so as to draw in the appropriate specialists.
I think ITI is maturing, with few big net-new items. This could be because it is not being approached with new work, but I suspect it is more a recognition that the existing infrastructure is supporting significant domain-specific work.

Friday, December 9, 2016

War against TLS 1.0

I have gotten into multiple discussions on the topic of TLS 1.0. The result always seems to be no change in anyone's position.

There are a few agreed-upon points:

  1. SSL is fully forbidden. 
  2. TLS 1.2 is best
  3. TLS 1.0 and 1.1 are not as good as 1.2
  4. Bad crypto algorithms must not be used (e.g. NULL, DES, MD5, etc)

However, some people are making a policy decision that TLS 1.2 is the ONLY allowed protocol. They are allowed to make this policy choice, as long as it doesn't impact others who can't support that policy.

I have no problem with a war on SSL. I simply have a realist view on available implementations of TLS 1.2 on platforms where software is available to run. I would love for everyone to have the latest protocols, and for those protocols to be perfectly implemented. Reality sucks!

Standards Recommendation on TLS

What is especially frustrating is that they point at standards as their justification. YET those standards explicitly allow use of TLS 1.1 and TLS 1.0 in a very specific and important practical case... that is, when a higher protocol version is not available.

It is this last clause that seems to be escaping recognition.

The 'standard' being pointed at is from the IETF (the writers of the TLS protocol): RFC 7525. This isn't just an IETF specification, it is a "Best Current Practice" -- aka BCP 195 -- published May 2015:



Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)

Let me excerpt the important part of that standard, from section 3.1.1:


3.1.1 SSL/TLS Protocol Versions

   It is important both to stop using old, less secure versions of SSL/
   TLS and to start using modern, more secure versions; therefore, the
   following are the recommendations concerning TLS/SSL protocol
   versions:

   o  Implementations MUST NOT negotiate SSL version 2.

      Rationale: Today, SSLv2 is considered insecure [RFC6176].

   o  Implementations MUST NOT negotiate SSL version 3.

      Rationale: SSLv3 [RFC6101] was an improvement over SSLv2 and
      plugged some significant security holes but did not support strong
      cipher suites.  SSLv3 does not support TLS extensions, some of
      which (e.g., renegotiation_info [RFC5746]) are security-critical.
      In addition, with the emergence of the POODLE attack [POODLE],
      SSLv3 is now widely recognized as fundamentally insecure.  See
      [DEP-SSLv3] for further details.

   o  Implementations SHOULD NOT negotiate TLS version 1.0 [RFC2246];
      the only exception is when no higher version is available in the
      negotiation.

      Rationale: TLS 1.0 (published in 1999) does not support many
      modern, strong cipher suites.  In addition, TLS 1.0 lacks a per-
      record Initialization Vector (IV) for CBC-based cipher suites and
      does not warn against common padding errors.

   o  Implementations SHOULD NOT negotiate TLS version 1.1 [RFC4346];
      the only exception is when no higher version is available in the
      negotiation.

      Rationale: TLS 1.1 (published in 2006) is a security improvement
      over TLS 1.0 but still does not support certain stronger cipher
      suites.

   o  Implementations MUST support TLS 1.2 [RFC5246] and MUST prefer to
      negotiate TLS version 1.2 over earlier versions of TLS.

      Rationale: Several stronger cipher suites are available only with
      TLS 1.2 (published in 2008).  In fact, the cipher suites
      recommended by this document (Section 4.2 below) are only
      available in TLS 1.2.

   This BCP applies to TLS 1.2 and also to earlier versions.  It is not
   safe for readers to assume that the recommendations in this BCP apply
   to any future version of TLS.

Note the last bullet tells you that you yourself must support TLS 1.2. A good thing if your platform allows it.
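As a sketch of what that policy can look like in code (using the Python 3.7+ ssl module purely as an example, not any particular product's configuration): TLS negotiation automatically selects the highest version both sides support, so you can set a floor of TLS 1.0 per the BCP while still supporting and preferring 1.2.

```python
import ssl

# Sketch of a BCP 195-style client policy: SSLv2/SSLv3 are never offered,
# TLS 1.2 is supported and preferred, and TLS 1.0/1.1 remain available only
# when the peer offers nothing higher.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1      # floor: TLS 1.0, never SSL
context.maximum_version = ssl.TLSVersion.TLSv1_2    # highest mutually supported wins

# A deployment that has confirmed every partner supports TLS 1.2 could instead
# raise the floor:
# context.minimum_version = ssl.TLSVersion.TLSv1_2
```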

Financial industry PCI standard

Doesn't PCI require that organizations stop using TLS 1.0?
(Taken from the Sequoia recommendation on TLS, as I am not a PCI expert.) As of 2016-11-23, PCI had issued the following text on their public web site at https://blog.pcisecuritystandards.org/migrating-from-ssl-and-early-tls which states: "The Payment Card Industry Security Standards Council (PCI SSC) is extending the migration completion date to 30 June 2018 for transitioning from SSL and TLS 1.0 to a secure version of TLS (currently v1.1 or higher). These dates provided by PCI SSC as of December 2015 supersede the original dates issued in both PCI Data Security Standard v3.1 (DSS 3.1) and in the //Migrating from SSL and early TLS// Information Supplement in April 2015."

Conclusion

Yes, it would be great if everyone had all the latest protocols, and if all those protocols were implemented without errors... BUT reality gets in the way. Especially so with Interoperability, where the whole point is to communicate with the systems that actually exist.

UPDATE: Readers should note that RFC 7525 is very readable and full of far more recommendations than just TLS version, including detailed discussion of cipher suites, authentication types, etc. There is no perfect solution or configuration. Security is RISK MANAGEMENT, and needs continuous care and monitoring by an expert.