Thursday, September 27, 2012

Level setting on Level of Assurance

Level-of-Assurance is NOT simple and can't be boiled down to 4 values. The renewed excitement on the topic of Level of Assurance brings me back to a need to describe the fundamentals. The S&I Framework has projects on it, and it also came up at the HL7 Security WG meeting. I look back in my blog and find I covered this best in March of 2011. The concepts are also covered in various articles (See References).

The fundamental point is that Level of Assurance applies at two abstractions. It is important that these two abstractions are recognized and independently managed:
  1. The Level of Assurance provided by the User Provisioning. That is, how sure can we be that the identity represents the individual that it claims to represent.
  2. The Level of Assurance of the authenticated session. That is, how sure can we be that the user logged on right now is the user that the identity represents.
In the case of (2) it is far easier to make a technical assessment of the level of assurance of the session authentication step. It generally falls along a typical graph showing the types of authentication mechanisms and how well those mechanisms are managed.

In the case of (1) there are very few technical aspects. Meaning that for (1) the Level of Assurance is almost completely a Policy and Procedure issue. There is high-level guidance provided by NIST Special Publication 800-63. This sets up the 4 different classes of level-of-assurance. But it does NOT set specific level-of-assurance values. The reason is that actual level-of-assurance can only be defined when bound to specific Policies, Procedures, Physical environment, as well as technology. This is recognized by NIST; this is why they didn't define a vocabulary. This is also why you will not find a vocabulary anywhere.

In the world of PKI, the Level-of-Assurance is defined by the Certificate Authority's published "Certificate Policy" document. This is not very scalable or reusable, so there is now an assignment to IANA to host a registry of Level-of-Assurance profiles. In this way there would be a registered URI that can be used as a Level-of-Assurance vocabulary.

RFC-6711 - An IANA Registry for Level of Assurance (LoA) Profiles
This document establishes an IANA registry for Level of Assurance (LoA) Profiles. The registry is intended to be used as an aid to discovering such LoA definitions in protocols that use an LoA concept, including Security Assertion Markup Language (SAML) 2.0 and OpenID Connect.
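To make this concrete: an IANA-registered LoA URI gives you an identifier, not an executable assurance level. Here is a minimal sketch in Python, with invented URIs and policy labels, of how a relying party might bind registered profiles to its own local policy.

```python
# Hypothetical sketch: the registered LoA profile URIs and local policy
# labels below are invented for illustration. The URI (arriving in a SAML
# AuthnContextClassRef or an OpenID Connect 'acr' claim) only points at a
# published profile; what that profile is worth remains a local policy
# decision.
ACCEPTED_LOA_PROFILES = {
    "urn:example:loa:in-person-proofed": "allow-clinical-access",
    "urn:example:loa:remote-proofed": "allow-portal-access-only",
}

def local_policy_for(asserted_loa_uri):
    """Return the locally bound policy for an asserted LoA URI, or None."""
    return ACCEPTED_LOA_PROFILES.get(asserted_loa_uri)

print(local_policy_for("urn:example:loa:in-person-proofed"))
```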
Conclusion:
Don't expect that the NIST level-of-assurance model is executable and don't expect a standard to develop a simple vocabulary. 

References:

Tuesday, September 25, 2012

Presentations from HL7 WGM - Baltimore - Intro to Security and Privacy

At the HL7 meeting in Baltimore, the Security workgroup offered a half-day of free tutorial on security and privacy. This is a topic that we have had available as a tutorial for a couple of years, but it either doesn't get selected by the tutorial committee or not enough people sign up. As a co-chair of the security workgroup I really want to get our message out. So we used our own workgroup meeting room and advertised that we would be teaching. For the Baltimore meeting we advertised this in the HL7 workgroup meeting brochure, and I also pushed it on my blog.
This session will focus on how to apply security and privacy to the health IT standards. It will cover the basics of security and privacy using real-world examples. The session will explain how each phase of design needs to consider risks to security and privacy to best design security and privacy in; and mechanisms for flowing risks down to the next phase of design. In addition, it will cover the security and privacy relevant standards that HL7 has to offer including: Role-Based-Access-Control Permissions, Security/Privacy ontology, ConfidentialityCode, CDA Consent Directive, Access Control Service, Audit Control Service, and others. These standards and services will be explained in the context of providing a secure and privacy protecting health IT environment.

First Quarter
Second Quarter

The good news is that we had about 15 people for both quarters. We are planning the same thing for Phoenix in January. I need to adjust the agenda to make sure we cover everything as Don and Trish didn't get much time to cover their slides. 

Friday, September 21, 2012

Advanced Access Controls to support sensitive health topics

I hope everyone who works on Privacy and Security in Health Information Exchanges felt the advance in the potential. What the VA and SAMHSA showed at the HL7 meeting was big. Not only was it big, but it was well done. Please review the News on this demonstration, the presentation given by Mike Davis, the Pilot specification, and the S&I Framework Data Segmentation for Privacy work. Pretty impressive stuff, right? YES it is. I am proud to be a part of the overall S&I Framework Data Segmentation for Privacy workgroup.

This demonstration really showed the potential. However I think it highlights the potential without exposing the realities of a demonstration. This is a demonstration that has not been evaluated from a Clinical Practice perspective. This demonstration modifies CDA documents without the Author knowing. This demonstration abuses some functionality of CDA. This demonstration abuses XDM metadata. This demonstration uses complex technology, leveraging highly advanced subject-matter-experts. This demonstration uses all greenfield technology. This demonstration is simply a demonstration, and when viewed in this light, this demonstration was fantastic.

I will describe the concerns that I have in a section below, but first I must describe my proposal. I have a proposal that does not need changes to standards, works with any document type including CDA, works with current technology, does not require high technology on the receiver side, and gets the same job done.

Easier Way: 
There is an easier way, one that does not require any changes to standards, although I will admit that it is less elegant and more blunt. The use-case space we are trying to solve is that many HIEs simply recommend against sharing sensitive topics. Thus, for a population with sensitive health topics, their data is excluded from the HIE, and possibly also the normal information that is tightly coupled with the sensitive information. I have proposed a more blunt solution, one that IHE built into the XDS metadata, yet one that has been rejected by some in the S&I Framework and ONC leadership. My proposal is somewhat represented by section 3.5 in the S&I Framework specification (4 pages of text).

My solution is to keep the original CDA document in full form and mark it as "Restricted". Then create a Transform of the CDA document that has the sensitive data removed, mark this as "Normal", and use the Metadata 'association' relationship to indicate that this is a "Transform". In the case where the documents are managed in something like XDS, this is all registered this way. Being registered does not mean it is accessible; Retrieval utilizes the metadata to control access. In the case of a PUSH using XDR or XDM (Direct), one would choose to put in both forms if the recipient has the authority for both, or just the normal form if the recipient has only the authority for normal data (clearly don't send it if the recipient has no authority). In both the PUSH and PULL case this is a simple Access Control rule applied to well-established Metadata, as sketched below.
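Here is my own minimal sketch of that rule, not text from any of the referenced specifications; the dict keys and clearance values are simplified stand-ins for the XDS DocumentEntry confidentialityCode and an Association of type "Transform".

```python
# Minimal sketch of the metadata-driven rule described above. The field
# names and clearance values are illustrative assumptions; in XDS they
# correspond to DocumentEntry confidentialityCode and an Association of
# type "Transform" linking the redacted copy back to the full original.
full_document = {
    "uniqueId": "doc-001-full",
    "confidentialityCode": "R",   # Restricted: complete CDA, nothing removed
}

redacted_document = {
    "uniqueId": "doc-001-redacted",
    "confidentialityCode": "N",   # Normal: sensitive entries removed
    "association": {"type": "Transform", "source": "doc-001-full"},
}

def releasable_to(recipient_clearance, registered_documents):
    """Documents this recipient may retrieve (PULL) or be sent (PUSH).

    recipient_clearance is "N" (normal data only) or "R" (also authorized
    for restricted data); the same one-line rule serves both directions.
    """
    allowed = {"N"} if recipient_clearance == "N" else {"N", "R"}
    return [d for d in registered_documents
            if d["confidentialityCode"] in allowed]

print([d["uniqueId"]
       for d in releasable_to("N", [full_document, redacted_document])])
```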

I would encourage the use of the Clinical Decision Support (CDS) that the demonstration used to detect sensitive topics. This detection of sensitive topics is NOT something that security experts are good at. So I would still suggest that a tool like this can be used to flag the Author with the need to create a transform, only when it is needed, along with a suggested transform. The method of using CDS to detect the presence and XSLT to transform the CDA is a fantastic idea. I just want the Author of the document to agree that it still is a worthwhile document that they want their name on. The main difference is that I just want the sensitive data removed, whereas the VA Demo used an extension to apply tags at the element level, removing some sensitive data and leaving other sensitive data in the document.
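As a toy illustration of that pattern (my own sketch, not the pilot's code): the element names, XPath, and the flaggedSensitive marker are invented; a real stylesheet would match coded entries in the CDA namespace.

```python
# Toy sketch: a detection step has flagged sensitive entries, an XSLT
# removes them, and the result is handed to the Author for sign-off.
from lxml import etree

redacting_xslt = etree.XSLT(etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity transform: copy everything ... -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- ... except the entries the detection step flagged as sensitive -->
  <xsl:template match="entry[@flaggedSensitive='true']"/>
</xsl:stylesheet>
"""))

source = etree.XML(b"""
<document>
  <entry>blood pressure reading</entry>
  <entry flaggedSensitive="true">substance abuse treatment note</entry>
</document>
""")

proposed_redaction = redacting_xslt(source)
print(str(proposed_redaction))
# The Author reviews proposed_redaction and decides whether it is still a
# clinically worthwhile document that they want their name on.
```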

In my solution, using the CDA document requires no new technology; it is just a CDA document. It doesn't have extended metadata or element tagging that would not be understood by current technology. A system that is allowed to get both Restricted and Normal data should be careful to import the data marked Normal first, as it doesn't come with Restrictions. Then inspect the Restricted document and, if necessary, import anything new found AND track the restrictions.

This solution works with ANY document. Including a document that uses FHIR encoding, or Green-CDA encoding, or CCR, or PDF, or DICOM-SR, or anything else. I like CDA, but if the only solution we have forces CDA, then we are cutting off documents from the past and cutting off new document types in the future.

None of this requires new standards or abuse of standards. YET it results in the same functionality for the use-case need. This solution put up against the one demonstrated by the VA/SAMHSA makes their demo look like a Rube Goldberg machine.

Yes, the solution requires that within the USA there needs to be a definition of 'why would you mark a document Restricted', and 'who has legitimate access to documents marked Restricted', and 'what are the receiver behaviors required when using Restricted documents'. All things that must be done no matter what technology we use.

Demonstration Dirty Details: 
I don’t want to sound like the demonstration was a complete farce. IT WAS GREAT. I only want to point out that there are some details that are important to recognize and that the solution is not close at hand and easy to implement.

I already pointed out above the CDA entry-level tagging, which would not be understood by current systems. It would also not be possible to get some systems to understand it. Further, the topic was brought up with the HL7 Structured Documents Workgroup (owners of CDA), and they indicated that there might be a better way to do it. They needed more time to think it through. Yes, making a mistake on a technology like CDA can mean long-term problems. CDA documents are intended to be used over the life of the patient; they are not transitory messages.

I also addressed the need to present the transform to the Author. It is very possible that the transform is of no clinical value. More important is that a CDA document is 'Authored'. It is a legal representation by the Author. This is not true if the Author never sees the content with their name on it. Keith has covered this far better than I have in Data Classification, Redaction and Clinical Documentation.

My biggest concern is the abuse of XDM metadata. These additions are allowed by the specification, but they are expected to be processed without fail by the recipient. It is not acceptable to extend a specification and expect compliance. Extensions are allowed by many standards, but extension behavior is not forced upon the recipient. I don't disagree with the need to convey the Obligations; I just want a reasoned and standards-based solution. This is an important advancement, and we need to do it the right way, not force a hack upon the world.

Lastly, they did magic using very talented subject-matter-experts and greenfield technology. It is important to recognize the non-trivial work necessary to build this functionality into a real workflow with real users and with sensitivity to patient safety.

Conclusion:
I have been involved in standards development for a very long time, and I know that it takes many years for a reasoned solution to be fully 'standardized'. This period does not mean no one is using that method; indeed, pilots and implementations are important to proving viability. Then there is the typical 5 years it takes the market to pick up and use the standard. Thus 7+ years is not unusual. I would rather see my Easier Way used today than continue to simply forbid sensitive topics from entering a Health Information Exchange.

So, why was my solution not part of a Pilot? Because it is already available, and thus doesn't need to be tested. I have been involved in the development and deployment of HIE technology. I know of multiple exchanges that were working on Policy solutions that would have enabled sensitive topics, for example Connecticut. These HIE solutions were put in mothballs as ONC has forced everyone to implement Direct first. I know of multiple states where progress was put on hold. I like Direct for what it was intended to solve, but I hate what the ONC mandates on Direct have done to HIE progress. It has not been a net positive.

The Easier Way can be used with Direct TODAY, and XDS, and NwHIN-Exchange. My solution does not require any changes to Standards. My solution works with any document type, not just CDA (e.g. Green-CDA, FHIR, DICOM). I ask that the Future-Nirvana not get in the way of the Good.

UPDATE:

I just remembered that the Easier Way has been demonstrated multiple years at HIMSS. The scenario is fully explained in this article on the HIMSS 2008 Privacy and Security Use-case. This is what inspired HIEs like Connecticut to put this into their infrastructure.

See also:

Saturday, September 15, 2012

The Magic of FHIR

As I look back on this week of HL7, it was like no week of HL7 I have ever had before. Not only did we have some fantastic and productive discussions on the Privacy and Security front, but there was such openness and excitement about FHIR. Grahame has really stepped in something here. I personally think it has much more to do with Grahame himself. The excitement he has is infectious. His personal background with experience with all of HL7 (and I found out on Thursday he also implemented DICOM in his distant past) gives him great perspective. He has surrounded himself with a highly competent team. He has forced openness and transparency, and strong governance. However, that would not be enough to make it as big as it is.

At the beginning of the week at the FHIR Connectathon someone asked if the reason why 'this' was so easy is because of REST. The answers given were very positive toward the REST approach. There was discussion that this REST focus should not be seen as purely HTTP-REST. It was pointed out strongly that although the initial pilots are using HTTP-REST, there is no reason that the FHIR Resources can't be serialized into a Document, or communicated using PUSH technology, or even transmitted using HL7 v2, v3, SOAP, etc. The FHIR 'Resources' are truly transport agnostic. The REST approach simply focuses the design on the concept of "Resources". These other forms need quite a bit of discussion before they are interoperable, but it is powerful. I thought about the question of why this was so big throughout the week, and although the REST philosophy is very much a contributing factor, it is not alone enough to make it as big as it is.

Part of the FHIR approach is to address the most mainstream use-cases and leave the edge-cases for extensions. Indeed, the concept of extending a FHIR Resource is encouraged. What is discouraged is doing something that abuses the specification, such as carrying the same information in an extension when there is a legitimate FHIR way to do it. Extensions allow the FHIR core to stick to the basics. The basic philosophy also recognizes that MANDATED values are likely bad. That is not to say that there won't be Profiles that do mandate, but the core leaves as much optional as possible. This is in theory at the base of any standard, but it is stated boldly in FHIR. A contributing factor, but not alone enough to make it as big as it is.

Back to Grahame: his documentation tooling is amazing. The whole FHIR specification is documented in an XML spreadsheet. This core table of truth is processed by a Java application, which he fully publishes, that spits out EVERYTHING. I can't claim that I can prove this, but everything that I heard about is generated by this application from this spreadsheet. This includes XML schema, test tools, documentation, Java objects, C# objects, JSON, examples, etc., etc. I would not be surprised if this thing spit out stuff that we don't know we need. This tooling is what I have heard many standards organizations want to be able to have. Grahame has made it happen for FHIR, but this alone is not enough to make it as big as it is.

A factor that I heard spoken of, but never spoken of as a factor, is the ready access to programming tools that make the grunt work totally hidden. I am not a programmer; I really want to get my fingers back into programming, but never find the time. Even if I found the time, it is something that one needs to use often. I think this is why Keith continues to do all kinds of demo code that shows this or that. It is really hard to be in the standards development world and yet also have responsibilities for programming. I think this is the sleeper HUGE factor. This array of readily available tooling makes processing simple-XML, JSON, and Atom feeds super easy. I heard and saw lots of people tell me just how easy it is to process simple-XML, JSON, and Atom. This was also the feedback that I got on the IHE MHD Profile; however, that didn't really take advantage of this power, yet… I know that this factor is far more powerful than the factors I have noted above, likely more powerful than all of them. We all know that the use of a standard is what makes that standard powerful. This tooling factor will make FHIR easy to use. This surely vindicates Arien Malec and John Halamka; they did tell us so. Clearly, as big as this factor is, it is not enough to make it as big as it is.

I have worked to coordinate FHIR, hData, and IHE MHD. I have had detailed discussions with Grahame on the concept of pulling the IHE work into FHIR; we are going to see what this might look like. I have worked with the FHA/S&I effort on RHEx as well. At this meeting I worked to pull into the tent the DICOM WADO work that is upcoming. Each of these efforts is independent, and can choose to cooperate or simply align or compete. I am amazed at how cooperative they all have been. It is early in these efforts themselves, and even earlier in the cooperation phase. I am still hopeful that we each can add value and thus the result will be more powerful than any one project could be. This was also jokingly referred to in the discussion of how to pronounce FHIR --> "FOUR"

There are many challenges that will need to be addressed. We just touched upon Security and Privacy this week. The actual problem is far bigger than a security layer like RHEx. It includes object classification and tagging. It includes an understanding of the difference between an object classification and the meaning of a communication, things like obligations. These are areas that we are working to develop even in the abstract, much less in a medium like FHIR that wants to keep everything simple. Related to this are data provenance, aggregation and disaggregation, de-identification and re-identification. There are areas like clinical accuracy and usefulness. There are concerns around patient safety, specifically regarding cases where not all the data was displayed to a treating doctor because that data was not understood. What does it mean to understand, and what does mustUnderstand mean?

I am worried that success for the intended use-cases will be abused for non-intended use-cases. This is of course the problem any standard has. But I see it rampant in Healthcare, mostly the abuse is government mandated.

As I write this, I am indeed listening to “The Firebird” by Stravinsky. There was joking on Twitter that one must read the FHIR specification while listening to “The Firebird”. Somehow it is working.

The excitement is not due to any one thing, nor any specific combination. It is not REST. It is not simple-XML or JSON. It is not Grahame. The excitement is driven by all of these factors converging at just the right time and place. Time will tell if this turns into something that can survive for a long time. We must be very careful to keep this in perspective.

Monday, September 10, 2012

IHE Mobile access to Health Documents - Trial Implementation


Updated August 2014 -- IHE is updating the MHD profile to align with FHIR (DocumentReference, DocumentManifest). Please refer to the IHE Wiki MHD Status page for current information. Also see the mHealth topic for updated blog articles.

Sunday, September 9, 2012

Meaningful Use Stage 2 - Transports Clarified

I got the answer to the question I have been hoping to ask. The question I asked is: What does ONC think they have specified with the three transports, with the options of (a)+(b) or (b)+(c)? For more detail on how this question comes about, see my blog articles: Meaningful Use Stage 2 : Transports, Minimal Metadata, and Karen's Cross or just Minimal Metadata. Essentially I wanted to ask the question so that I can understand the desired state. Once I know the desired state then I can help get clarification.

The good news is that the ONC goal is very reasonable and very progressive. The goal is to recognize that the basic Direct transport is a good start, but that it is insufficient to support more complex and robust workflows. They recognize that there are many Exchanges using XDR/XDS/XCA, including CCC, Connecticut, Texas, and NwHIN-Exchange. Thus they want to encourage, through naming specific options, EHR vendors to reach beyond the minimal e-mail transport down the pathway of a more robust interaction. I speak of this stepping-stone approach in a couple of blog posts, so I am happy about this: Stepping stone off of FAX to Secure-Email, What is the benefit of an HIE, and HIE using IHE.

I am not going to fully define Karen's Cross, but the whole Karen's Cross specification recognizes that although one might be able to take an inbound Direct message that is not in an XDM content package and invent hard-coded metadata for the XDR outbound link, if the inbound Direct message comes in with XDM content packaging then it comes in with exactly the metadata that the XDR outbound transaction needs. Note that an inbound XDR that needs to be converted to Direct is easy; the XDR content is just encapsulated in XDM and sent over Direct.

Which does remind everyone that Direct does require that you support receiving XDM content, which minimally means you have the ability to read a ZIP file and display the INDEX.HTM file that will be at the root of the ZIP file. Really easy, and almost hard to make not work.
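A minimal sketch of that receive-side obligation (the file name is an invented example): open the ZIP, find INDEX.HTM at its root, and hand it to any browser to display.

```python
# Minimal receiver sketch: an XDM package is just a ZIP file, and the
# bare-minimum behavior is to locate INDEX.HTM at its root and display it.
# "received_direct_attachment.zip" is an invented example file name.
import os
import tempfile
import webbrowser
import zipfile

def show_xdm_index(zip_path):
    with zipfile.ZipFile(zip_path) as xdm:
        index_name = next(name for name in xdm.namelist()
                          if name.upper() in ("INDEX.HTM", "INDEX.HTML"))
        html = xdm.read(index_name)
    out_path = os.path.join(tempfile.mkdtemp(), "INDEX.HTM")
    with open(out_path, "wb") as out:
        out.write(html)
    webbrowser.open("file://" + out_path)

# show_xdm_index("received_direct_attachment.zip")
```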

So what ONC wants to encourage is the sending of Direct messages with the XDM content packaging. This is a part of the Direct specification, but it is in there with conditions that make it required only IF your sending system has the ability to send using the XDM content packaging. So, ONC came up with the first option [(a)+(b)] to encourage the use of XDM content packaging with the Direct specification.

The third option - what they call (b)+(c) - is trying to encourage the use of XDR with the specific secure SOAP stack. They could have done this with simply XDR+XUA+ATNA, because that is what the interoperability specification turns out to equal. I would say that the Secure SOAP Transport -- (c) -- is indeed more constrained than ATNA and XUA, as it forces only TLS and has some vocabulary bindings. The advantage of including (c) as a standalone transport is that it sets the stage for future transports such as a secure HTTP/REST (e.g. RHEx); and by being just a secure SOAP stack it encourages experimentation with XCA/XCPD.

So we now know that the testing for (a)+(b) would be testing the Direct Transport using XDM content packaging, and the testing for (b)+(c) would be testing XDR with mutually-authenticated-TLS and XUA user assertions. This is a good path that I can help ONC get clarified. The diagram above is originally from EHRA and has been used by IHE. I have co-written the IHE white paper on this topic and presented it along with Karen Witting. This is a fantastic result; now comes the hard part of getting an ONC-written FAQ and test plans that hit the right things.

Friday, September 7, 2012

MU2 Wave 1 of Draft Test Procedures -- Integrity Problem

The first wave of Draft Test procedures is out: 
For more information, and the Wave One 2014 Edition draft Test Procedures, please visit http://www.healthit.gov/policy-researchers-implementers/2014-edition-draft-test-procedures
This is an opportunity to see if your interpretation of the Final Meaningful Use Stage 2 rules is the same as the Testers'. I looked at three of the test procedures that fall into my scope.
  • §170.314(d)(5)  Automatic log-off  Test Procedure
    • I think they correctly changed this to reflect the various ways that are used and are appropriate. It will be interesting to see specific types of EHR technology tested against this procedure; it is possible someone might still be confused.
  • §170.314(d)(8) Integrity Test Procedure
    • I think they are way off base, or too aggressively focused on the detail and losing sight of the overall goal. They continue to have the language in their test procedure that caused me to write my most popular article of all time, "Meaningful Use Encryption - passing the tests". I am not happy about that article, but it gets to the point. The requirement for Integrity, just like the requirement for Encryption, is there to assure that wherever Integrity or Encryption technologies are utilized, legitimate and approved algorithms are used. Quite often this is next to impossible to prove. The best way to prove these is where interoperability protocols are used. The Direct Project and the Secure SOAP Transport have these algorithms built in. So, testing these for interoperability will have the effect of testing the Integrity and Encryption lines. Thus a standalone procedure should focus ONLY on uses of Hashing or Encryption other than those specified in the Transports section -- and nothing but Transports is required. Thus this procedure should start with "The EHR vendor shall identify how they utilize Integrity other than through defined Transports." And then focus the testing on those. This is not going to make it easy, as the place where this is going to happen is transparently in Databases and Data-At-Rest. Thus there is nothing that the EHR vendor can possibly show. I think this item should be… Not Tested outside of being integrated as part of Transport.
  • §170.314(d)(9) Optional—accounting of disclosures Test Procedure
    • I think they got this one in good shape too. It is now clear that the interpretation of this optional criterion is a User Interface where the user can indicate that a "Disclosure" has happened. Thus this is not any automated accounting, but it does provide a way to identify disclosures using readily available technology at the fingertips of those that might be involved in a legitimate disclosure. The test procedure seems reasonable as well.

Thursday, September 6, 2012

On The Meaningful Use Stage 2 Rules

I have written many articles on the US-centric Meaningful Use Stage 2. Some are very deep analysis of specific problems. It may seem that all I have to say is negative and nitpicky things. I want to make it clear that I am very happy with how Stage 2 came out. I just find that others have done a fantastic job of outlining the rules, so I have focused only on deep analysis in my area of expertise where I see potential confusion.

First, the ONC and CMS summaries are great stuff. They do leave out details that end up being the subject of my blog articles. But they are really good summaries. Keith has clarified much as well. I have seen plenty of other summaries; I wish I had a catalog. There is just no good reason for me to pile on.

Meaningful Use:
More Privacy/Security Topics are available on my blog.

Tuesday, September 4, 2012

Meaningful Use Stage 2 : Transports

There are two perspectives
1) The Transport standard is clear. It is Direct. Everything else said in the regulation about transports is optional and therefore meaningless.
2) There is still the problem of the (b) Transport, which pulls in (b)+(c) and also (a)+(b)

Simple View – Minimum work to get Certified
There is no question what is minimally required, it is the Direct Project. So, test to this and be done. This is the easiest way to get through certification and get your CEHRT stamp.

Note that just because the Direct Project is the only required Transport, does not mean that it is the only Transport that can be used. I heard Steve Posnack reiterate this on many of the webinars. The CMS rule doesn't differentiate between transports, it is more concerned with content and outcomes (AS IT SHOULD BE).

The Direct Project is a pointed message, but not necessarily without issues; here are a bunch of my blog articles:
Problems in Extended Transports
Let's just say that you don't like doing the minimum work for Certification, or that you want to go above and beyond, or that your EHR is so far from supporting Direct that you need alternatives. On the last point, sorry, but you must support Direct. The problem with the Transports other than Direct is, simply, confusion.

I cover these problems in detail in Karen's Cross or just Minimal Metadata. The diagram at the right is informative; it is Karen's Cross. The GREEN arrows are the (a) transport, Direct. The BLACK arrows are the (c) transport, Secure SOAP. The RED arrows are the (b) transport, a proxy service that converts (a) to (c) and (c) to (a). The (b) transport is not a transport and thus can't be called upon as a transport. Add to that that the MU2 specification always grouped the (b) transport with either (a) or (c), and one has something that simply doesn't compute. I guessed that ONC really just wants the Minimal Metadata, but I am not sure. I think they are actually asking for CEHRT to somehow certify that they can work in an operational environment where someone else provides the (b) proxy service. The use of the (b) proxy service is an operational aspect that should have been placed upon the CMS side.

Besides the Karen's Cross or just Minimal Metadata issue, one can just look at the (c) transport and treat it as I outlined in Meaningful Use Stage 2 -- 170.202 Transport. Essentially the Secure SOAP stack is simply the lower half of all of the SOAP-based profiles found in IHE. ONC has chosen to chop horizontally, where IHE builds vertically. This is shown in the eye chart to the left, which is not intended to be readable. Either way you slice it, you have a secure SOAP transport stack that is carrying some SOAP content.

Thus it matters little if you use any of the Data Sharing profiles from IHE (XDR, XDS, XCA) or the Patient Management profiles from IHE (XCPD, PIXv3, PDQv3). What does matter is that you MUST be using ATNA secure communications and XUA user assertions. YES, the IHE profiles are the parent of the NwHIN-Exchange specification and are compatible. It is not that I work hard to propagate my view of the world; I work hard to keep divergence from happening when it is not necessary. I am very willing to entertain necessary divergence, and have lots of evidence that I support Direct.

But what about Encryption and Hashing? The MU2 requirement gets specific about Encryption and Hashing, but don't worry.
§170.210(f) Encryption and hashing of electronic health information. Any encryption and hashing algorithm identified by the National Institute of Standards and Technology (NIST) as an approved security function in Annex A of the FIPS Publication 140-2 (incorporated by reference in § 170.299).
This Encryption and Hashing requirement is important but not hard to meet. The important part is that proprietary encryption is unacceptable and old encryption algorithms are unacceptable. Modern algorithms (AES and SHA) are acceptable. The use of FIPS Publication 140-2 allows HHS and CMS to benefit from the intelligence community's assessment of cryptographic algorithms, thus moving up automatically when the intelligence community does. The use of Annex A rather than the core FIPS 140-2 specification allows for relaxed rules around certification; this doesn't change the technical aspect, but it does greatly reduce the invasive code inspection requirements of actual FIPS certification. Annex A is very short, 6 pages long. The summary: Encryption, AES or 3DES; Hashing, SHA-1 or higher.
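A quick sketch of what "approved" looks like in practice, using the standard library's hashlib and the pyca/cryptography package; the message content is obviously just an example.

```python
# Sketch: pick algorithms from the FIPS 140-2 Annex A list rather than
# rolling your own. SHA-256 for hashing, AES-256-GCM for encryption.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

message = b"example electronic health information"

digest = hashlib.sha256(message).hexdigest()        # approved hash family
key = AESGCM.generate_key(bit_length=256)           # approved cipher (AES)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, message, None)

print("SHA-256:", digest)
print("AES-256-GCM ciphertext bytes:", len(ciphertext))
```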

All of the Transports include full security as part of the specification, so they are by definition already compliant with the Encryption and Hashing requirements.
  • Direct – S/MIME authenticated using X.509 Certificates/Private Keys, Encrypted with AES128 or AES256, and Hashed with SHA1 or SHA256.
  • Secure SOAP – secured with Mutual-Authenticated-TLS using X.509 Certificates/Private Keys, Encrypted with AES, and hashed with HMAC-SHA1; for more details see: Moving to SHA256 with TLS requires an upgrade
  • Secure SOAP – End-to-End - This is in IHE ATNA, but not in MU2 – There is an option to use WS-Security end-to-end security, but this requires also an update of common SOAP stacks and is administratively harder to achieve. Risk Assessment needs to drive the cost benefit.
  • Secure HL7 v2 – There is no mention of this dirty little secret, but all of those HL7 v2 requirements in the regulation would also need to meet the Encryption and Hashing requirement. The solution here is to use the same Mutual-Authenticated-TLS as is used in the Secure SOAP stack (a minimal sketch follows this list). Many toolkits support this, but not all of them. At IHE Connectathon we run into people who have forgotten to test this; they usually get going quickly.
  • Patient Engagement - Secure Messaging – There is no guidance on what Secure Messaging is, and I think this is the right solution. But whatever is used for Secure Messaging must also meet the § 170.210(f) requirements. Given that the requirements are just focused on Encryption and Hashing; this is easily met with a typical HTTPS web-portal.
  • Data at Rest – End-user device encryption. -- Okay, this isn't a transport, but whatever solution is used to protect data at rest must also meet the Encryption and Hashing requirements. A good commercial solution or even the solutions built into operating systems cover this. What they don't cover is KEY MANAGEMENT. If you don't protect the key, then it doesn't matter how well the data is encrypted.
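As promised in the Secure HL7 v2 item above, here is a minimal sketch of a mutual-authenticated-TLS channel using Python's standard ssl module. The host name, port, and file names are assumptions; the point is only that both sides present X.509 certificates and each verifies the other against a chosen CA.

```python
# Sketch of mutual-authenticated-TLS: both ends load their own certificate
# and key, and each verifies the peer against a trusted CA. The HL7 v2
# MLLP stream (or a SOAP stack) then runs over the secured socket.
# Host name, port, and file names are invented for the example.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("partner_ca.pem")                  # CAs I trust
context.load_cert_chain("my_node_cert.pem", "my_node_key.pem")   # my identity
context.verify_mode = ssl.CERT_REQUIRED                          # peer must prove itself

def open_mutually_authenticated_channel(host="hl7.example.org", port=6661):
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# channel = open_mutually_authenticated_channel()
```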
Summary:
The transport to certify is clear: just get Direct done somehow. If you can't do Direct, then you are going to struggle with trying to figure out what is going to be required of you. The Test Tools will likely answer this eventually; there certainly is nothing clear today to start developing toward. Stick with XDR, which is a subset of XDS. This solution is highly reusable.

Four years of blogging

So what? I was pushed into blogging because I found that I was explaining things over and over. The sad part is that I still tell the same stories over and over, hopefully to new people. I hope that each time I add something and truly am communicating and educating. I know that I am learning as I go, so I suspect the articles get better.

In September of 2009 I posted these articles. I am not excited that this list could be produced today. I hope that I am making progress:
I have a "Topics" button on my blog that contains pointers to the most useful articles from my blog, arranged by topic. I keep that up-to-date. Here it is as of today:

Security/Privacy Bloginar: IHE - Privacy and Security Profiles - Introduction

User Identity and Authentication
Directories
Patient Privacy controls (aka Consent, Authorization, Data Segmentation)
Access Control (Consent enforcement)
Audit Control
Secure Communications
Signature
De-Identification
Risk Assessment/Management
Document Management (HIE)
Patient Identity
The Direct Project


Other

Monday, September 3, 2012

Meaningful Use Stage 2 - Audit Logging - Privacy and Security

Updated August, 2014 -- It seems people are still reading this post. I am simply adding a forward pointer to the Audit Logging Topic where you might find fresher blog articles.

Saturday, September 1, 2012

Direct addresses- Trusted vs Trustable

I cover the Identity-Proofing problem that Direct is having right now. Some are arguing that Identity Proofing is not needed; I argue that it is always needed, but sometimes the proofing is done in a distributed way and sometimes it is centrally done. The Identity-Proofing is not the critical thing that is needed. The identity proofing is just an example of people confusing Trusted with Trustable. Trustable is what we should be focused on. Trustable is associated with Identity and Authentication. Trusted is a result of a determination of Authorization. Identity-Proofing status is important to carry in the Identity, but it is just an attribute. It is one of many attributes in the Certificate that is used to determine if this certificate 'is the one' – the one that I should Trust.

The Deep Dive
This brings up the question of what are the important use-cases, and exactly what is the RISK that the certificate system in Direct needs to solve.

This is easiest seen if I start with the controversial use-case, the one most likely to be used by a General Provider with their Patients. In this case the GP knows who their patients are, at least well enough to help them get healthy. Thus the GP really has done a Proofing step. In this case the Patient has a conversation with their GP and tells them their PHR e-mail address (e.g. HealthVault account e-mail address). The GP puts this e-mail address into his Direct solution for sending secure e-mail and documents. Note that I will show that the case where the sender didn't do in-person proofing is a super-set with a few additional requirements, and no fewer requirements.

The Risk
In this case, when the GP sends something he needs the infrastructure to make sure that the certificate that is discovered is truly the certificate for the e-mail address given. That is, a malicious individual, the attacker, wants the content of the Direct message. In this case the attacker needs to get the sender to send the secured e-mail with the encrypted envelope targeting a certificate that the malicious individual has the private key for. The Direct Project uses really good security, both signed and encrypted e-mail using really good crypto algorithms. Thus the attacker must become what is seen as the intended recipient. Let me explain:

In non-secure e-mail (normal e-mail) this is rather easy: you just attack the DNS system and return your mail-server address faster as the location for the e-mail to be sent (the DNS MX Record Lookup, or DNS server address lookup). If you do this with a S/MIME protected message you will get nothing but the encrypted message. If poor encryption is used, you can beat on that e-mail message until you crack it, but if we choose good algorithms then this effort takes too long to be useful, and Direct specified strong algorithms. So we have this risk handled, through picking an end-to-end solution like S/MIME and good algorithms. And we leave the transport open for best flexibility, flexibility that is really useful for redundancy and robustness. As strange as it might seem, we simply live with this risk because it has a very low impact. See How to apply Risk Assessment to get your Privacy and Security requirements.

The attacker wants access to the content of the secure e-mail, so they need to attack DNS again, but this time get it to quickly return a certificate that the attacker wants used. If the sender system doesn't check this certificate, but just sends to it, then the attacker is back at non-secure e-mail and has full use of the content sent. The Direct Project requires checking of the certificate, which I will outline below, so compliance to the specification is important. Note that the certificate checking is mostly what all systems do for checking certificates.

There are secureDNS and secure LDAP solutions. They are not recommended because there is a very robust system to validate Certificates, including discovering the assigning CA and checking for revocation. This system is preferable because it allows for far more flexible DNS and LDAP configurations, but mostly because it is one system that works equally well regardless of how you got the certificate (which means we could add new ways to get the certificate and it would still work, such as from a previous secure e-mail conversation). Note that if you use secureDNS or secure LDAP there is little overhead, and if it works for you then there is no harm in doing the verification multiple ways. SecureDNS might just mature enough for us to rely on it, but it is simply not needed for the Direct need.

The sending infrastructure will ask the whole world's DNS and the whole world's LDAP for a certificate that claims to be 'the' certificate for the e-mail address that the patient provided. The GP is very confident that the e-mail address is right, and besides, if it is wrong then it is the Patient that did the deceiving. But back to the problem. Let's just say that 10 certificates claim to be 'the one'; how does the infrastructure know which one to choose? It can't be the FIRST certificate to come back, as that just means the attacker must get their response back first, which isn't hard. The sending infrastructure must thus not stop looking after receiving one response. It must wait a reasonable time for all potential candidates. During this time it can be 'validating' the certificates that have arrived.

Note that multiple responses are not an indication of an attack. There are good reasons for multiple certificates. The most likely is: as your certificate approaches expiration, but before it does expire, you need to get a new certificate issued. It is important that the expiration times overlap to allow for latency in the system; yes, many months' worth of latency is needed. Once you have your new certificate you need to send both your old and your new certificate when a DNS or LDAP request comes in, and thus both are valid. In this case, the sender should choose the 'newest' for best results. There is really no sending reason to select the oldest; however, there are signature validation reasons why you might need the old one.
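A minimal sketch of that selection rule (my own illustration, using the pyca/cryptography package; the candidate list is assumed to have been gathered from the DNS and LDAP responses): drop anything outside its validity window, then prefer the newest.

```python
# Sketch: never take the first answer just because it arrived first.
# Collect the candidates, discard any outside their validity window, and
# prefer the most recently issued. Requires a recent pyca/cryptography.
from datetime import datetime, timezone
from cryptography import x509

def pick_certificate(candidate_pems):
    now = datetime.now(timezone.utc)
    valid = []
    for pem in candidate_pems:
        cert = x509.load_pem_x509_certificate(pem)
        if cert.not_valid_before_utc <= now <= cert.not_valid_after_utc:
            valid.append(cert)
    if not valid:
        return None   # nothing usable came back in the waiting window
    # Multiple valid answers are normal during certificate rollover; for
    # sending, choose the newest one.
    return max(valid, key=lambda c: c.not_valid_before_utc)
```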

So the RISK is exposure of content because the certificate you use to encrypt the e-mail is not the legitimate certificate for the e-mail address you want to send to.  Note that this risk was known, and is THE risk to be resolved in the original Direct Project threat assessment.

The solution
I will repeat that in this case we only want to make sure that the certificate is the legitimate certificate for the e-mail address we have. In this case we don't care about proofing. Indeed, even if I had started with a use-case where the certificate needed to include proof of high-assurance proofing, all these steps would be needed. Thus the main problem is NOT the assurance; it is the legitimacy.

First we must do cryptographic checking. Doing this on a certificate is easy; it is available in many toolkits and operating systems: is the cert signed, is the expiration time still valid, does the chain lead to a trusted CA, is the cert not revoked. The cryptographic part is well known; yes, one must be using mature algorithms and key lengths. Please don't allow RSA 1024 or an MD5 hash. Look to FIPS 140-2 for guidance, and bump it up. In this case future-proofing is cheap, and certificates tend to be around a long time. What is not solved yet is the list of CA certificates that I trust to issue legitimate certificates. Let me defer discussing where this magic-list comes from until later.

If I have cryptographically tested the certificate, then I know that the content of the certificate is valid. The next couple of steps will be looking at the attributes inside the certificate and making more decisions. The following checks are important for all use-cases. I want to make sure the certificate claims to be for the e-mail address that I want to send to. This is needed by all sending use-cases. There are some short-cuts going on here; I don't like them, but we just need to deal with these short-cuts in the future.

Further, I need the root CA to be claiming that it issues certificates for the domain of e-mail addresses. This is likely just the domain-name part of the e-mail address. I will show later, in the magic-list, why this is important. A certificate issued in violation of the CA policy needs to be reported as a potential indicator of a compromised CA. The checking of the e-mail address and the CA domain issuing is important as it stops false certificates.

I also need to make sure that the certificate is one issued for the purposes of S/MIME e-mail encryption. This is another reason why I might get multiple legitimate certificates. Some of the certificates might be constrained for just digital-signature.
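Continuing the sketch from above (again with the pyca/cryptography package; the e-mail address is invented): once the signature chain and revocation checks pass, the attributes inside the certificate still have to line up with the address I intend to send to and with S/MIME use.

```python
# Sketch of the attribute checks: the certificate must claim the target
# e-mail address (or, for an organizationally bound certificate, its
# domain), and must not be restricted to uses other than S/MIME e-mail
# protection. "patient@phr.example.net" is an invented address.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def certificate_matches_recipient(cert, direct_address):
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value
    except x509.ExtensionNotFound:
        return False                      # no claimed addresses at all
    claimed = set(san.get_values_for_type(x509.RFC822Name))
    claimed |= set(san.get_values_for_type(x509.DNSName))
    domain = direct_address.split("@", 1)[1]
    if direct_address not in claimed and domain not in claimed:
        return False                      # not the address I was given
    try:
        eku = cert.extensions.get_extension_for_class(
            x509.ExtendedKeyUsage).value
        if ExtendedKeyUsageOID.EMAIL_PROTECTION not in eku:
            return False                  # signature-only or other purpose
    except x509.ExtensionNotFound:
        pass                              # no EKU restriction present
    return True

# certificate_matches_recipient(selected_cert, "patient@phr.example.net")
```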

Identity Proofing:
It is only here that one cares if the identity in the certificate claims to be of a specific Identity Proofing Assurance level. And the use of this knowledge is not security-layer use; it is application logic. If the sender has done in-person proofing, then any identity assurance level is fine. It is only if the use-case has not done an in-person proofing that the infrastructure should utilize the Certificate claim of Identity Proofing level. Yup, this is all I need say.

Magic-list
Now comes the hard part: how do you get the magic list of trusted CA certificates? You start from some magic list of trustable CA certificates and make some local decisions. You might outsource this local decision making to your Full-Service-HISP, but it had better be transparent between you and your HISP.

Identifying the list of trustable CA Certificates is a hard problem, and done wrong it can be really wrong. See the mess that has happened in the Web-Browser world, where they took a specific short-cut that eventually caught up to them. A short-cut that seemed right at the time, and I would agree that it was the right short-cut AT THE TIME. The problem is that once you have taken a short-cut, it becomes the de facto pathway and no one ever challenges that short-cut. This one should have been challenged and replaced in the past 5 years. So, if we take a short-cut, please put an expiration on that short-cut.

Identifying the list of trustable CA Certificates is the space that DirectTrust is trying to fill. How does someone choose the list of CA certificates that they are going to trust? I will repeat: this is a hard problem. Often with hard problems a group will end up with blinders on. I think this group is so worried about "Identity Proofing" that they can't see that this is not the most important thing. It is important to know, but not important to constrain. Meaning: for any certificate, the user of that certificate needs to be able to determine the identity-assurance-level.

So there needs to be a managed list of 'trustable CA certificates' – Not 'Trusted', but 'Trustable'. The actual trust decision is not a central authority decision. The trust decision really belongs to the sender and receiver of the message. All the infrastructure need only support that trust. So here are the qualities I can think of for this "Trustable CA Certificate" list (a rough sketch of such an entry follows the list):

a) Whatever CA Certificates are listed must be clearly identified as to WHY they were selected. The list of reasons why one might trust is a growing list: Federal-PKI-Bridge, etc.

b) What certificate policy is used by that CA. This includes the Assurance level the CA issues at.

c) What e-mail address space is issued under this CA. E-mail addresses are made up of two parts separated by the "@" character. The first part is the unique identifier within an assigning domain; the second part is the unique identifier of the domain. A CA really needs to be aligned with the e-mail assigning domain. The reason why this is important is that the CA certificate is what is listed in the magic-list, and therefore it needs to be transparent about what should be considered legitimate identities issued by it. This is simply an indicator that this CA is the assigning-authority for identities issued in that domain.

d) When is this recommendation of this CA certificate going to expire? I would recommend it be short, given that there is really not a good solution for revocation-checking of Root CA Certificates – although it can be done.

e) ….I am sure there are more and I expect those working on this are making good progress…
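A rough sketch of what one entry in such a list might carry, following points (a) through (d) above; the field names are my own invention, not a DirectTrust or IANA structure.

```python
# Illustrative only: one entry in a "trustable CA certificate" list.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class TrustableCAEntry:
    ca_certificate_pem: str          # the anchor certificate itself
    reasons_selected: List[str]      # (a) why this CA made the list
    certificate_policy_url: str      # (b) its policy, including assurance level
    issues_for_domains: List[str]    # (c) e-mail domains it may issue under
    recommendation_expires: date     # (d) keep this short

example_entry = TrustableCAEntry(
    ca_certificate_pem="-----BEGIN CERTIFICATE-----...",
    reasons_selected=["cross-certified with the Federal PKI Bridge"],
    certificate_policy_url="https://ca.example.org/certificate-policy.pdf",
    issues_for_domains=["direct.example-hospital.org"],
    recommendation_expires=date(2013, 9, 1),
)
```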

Given this list, the question remains: how is this magic list of trustable CA certificates to be distributed? I can only point at the browser market, and hope they come up with a solution. They need automated distribution more than we do. We have a very manageable number of potentially trustable CA certificates today. I recommend we wait on the scalability problem for a while. Yes, this is the same short-cut the browser market chose. Yes, a short-cut that needs an expiration.

More use-cases
This article is already long, but these other use-cases are important too:
I have already covered
1) I need to send a secure message to an e-mail address where I have already in-person proofed the identity.
2) I need to send a secure message to an e-mail address and I need there to be technical proof that the identity has been in-person proofed by someone trustable.
3) I have received a secure message, is it from someone with an identity that has a high assurance identity?
4) I have received a secure message, is it from someone I have already in-person proofed?

There are also some workflow use-cases that are prior to sending or post receiving. Like:
1) I need to find an address for a specific name. Display the identities found so that I can pick the right one. In this case the identities really need to be in-person proofed by someone else.
2) I have an e-mail address, what is the available information on that identity (aka the certificate content, but could also be the LDAP content).

The mis-use-cases are just as important to PREVENT:
1) An attacker wants to have the content of your message
2) An attacker wants you to receive and accept their message

Conclusion
In all these cases the knowledge of any in-person proofing by the CA is important, but it is an attribute carried in the certificate. This attribute is used in workflows and/or as part of the 'authorization' decision. This is the critical step, as the authorization decision is moving from "Trustable" to "Trusted". Trusted is a decision of Authorization. Trusted is not a decision of Identification or even Authentication.

I do understand that some Healthcare Providers want to outsource this to their Full-Service-HISP. There are a vast number of them that really should do this. Trust is hard, and sometimes it is the right thing to outsource hard things. I outsource non-routine healthcare assessments to my Healthcare Provider, because that is hard. But I expect them to provide me choices and I ultimately make the decision based on their professional assessment. Sometimes there is only one choice and it gets done with little discussion (you need a lab test), sometimes the choices are vast.

I think the challenge is not in Identity-Proofing; it is in supporting reasonable decisions on a trustable list of CA certificates. NOT 'the trusted list', but a 'trustable list'.

updated: to fix a sentence regarding secureDNS and secure LDAP.