Wednesday, June 27, 2012

Leap Second, yes it has security and privacy relevance


There is a leap second on June 30th. The security relevance is: how will your software deal with this leap second? Will events that happened during the extra second be properly accounted for? Will the timestamp show :60, or will :59 show for two seconds? -- the 'accountability' side of Security.

Will your timers handle a request to delay by 60 seconds when there are actually 61? Will a deadlock occur? -- the 'availability' side of Security.
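To make the timer concern concrete, here is a minimal Python sketch (mine, not from any referenced implementation): a delay computed against the wall clock can misbehave when the clock is stepped, while a monotonic clock is unaffected by leap-second or NTP adjustments.

```python
import time

def sleep_for_wallclock(seconds):
    # Fragile: time.time() follows the system wall clock, so a leap-second
    # or NTP step during the wait stretches or shortens the delay.
    deadline = time.time() + seconds
    while time.time() < deadline:
        time.sleep(0.05)

def sleep_for_monotonic(seconds):
    # Robust: time.monotonic() never jumps when the wall clock is adjusted,
    # so the requested delay is honored regardless of leap seconds.
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        time.sleep(0.05)
```

The wall-clock version is how many ad-hoc timers are written, and it is exactly the kind of code that can hang (or fire early) across a clock step.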

Will your software adjust the clock at all? Or will it be permanently off by a second -- likely by many seconds by now, since we have accumulated almost half a minute of leap seconds. This is what the GPS system does: it ignores leap seconds rather than deal with the accounting mess.
Of course, on the other side of GMT they see it differently; and businesses care too.

A good-quality implementation of NTP will simply smooth the second out, so that there never is a single leap second but rather a series of leap microseconds. Not all time-synchronization implementations are that advanced, though.
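That smoothing (often called a leap smear) amounts to simple arithmetic. A sketch of a linear smear over a 24-hour window -- real servers pick their own window and curve:

```python
def smeared_adjustment(elapsed, window=86400.0, leap=1.0):
    """Fraction of the leap second applied after `elapsed` seconds of the
    smear window; added to raw time, it removes the discontinuity."""
    fraction = min(max(elapsed / window, 0.0), 1.0)
    return leap * fraction
```

Halfway through the window the clock has absorbed half the leap second, and by the end the full second -- no single instant ever shows a jump.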
----------------------------------
Update: July 2, 2012 -- Fantastic analysis done by Rob Horn: not just what the problem was, but why we find ourselves in this strange space where this matters yet doesn't really matter.

Monday, June 25, 2012

Constructive comments on Data Segmentation for Privacy

Comments on the first draft of the S&I Framework Data Segmentation for Privacy Reference Implementation Guide are due today. As usual, I left this exercise to the very end. I spent 10 hours on Sunday reviewing it in fine detail, providing constructive comments on everything from a simple typo to a fundamental mistake. I came up with 144 comments, 6 of which I consider major show-stoppers. These 6 are not hard to fix, but it is critical that they be fixed.

I have marked up the PDF and produced a 22 page extract of my 144 individual and detailed comments.

The 6 items can actually be summarized into THREE.

Use XD* family as the Document Level control, not CDA:
The draft today proposes that even for whole document control one must use CDA. This means that if you have a DICOM object, PDF document, text document, CCR, Blue-Button, or some form of workflow (such as XDW); that this object MUST be encapsulated inside of a CDA document. This fundamentally is a waste as the exact same functionality can be achieved simply through the use of the XD* family of transactions, using the rich XD* metadata. Indeed this seems to be the message except for specific sections of chapter 3.

Not only is it unnecessary to encapsulate everything in CDA, but you still MUST support XD* metadata as an external embodiment of the metadata. Let me explain this another way: if the metadata lives only in the CDA, then you MUST open the CDA document only to discover that you should NOT have. This is why there are security layers built into the XD* family of profiles that place the minimal but important metadata in the transaction, where the access control service can prevent the opening of the CDA document without first invoking the proper controls.

The XD* mechanism is needed to define the whole-document-level control, and even if the CDA document contains section or entry controls, the XD* mechanism is still needed to convey the high-water mark (the highest confidentiality code contained within the content).

Thus we MUST define the XD* family mechanism anyway, so the additional functionality inside the CDA for document level control is free, and we enable entry level control.
  • Direct – shall use the Direct specified XDM attachment to carry document level controlling metadata
  • Exchange – shall use the XDR and XCA mechanisms to carry document level controlling metadata
  • HIE – shall use the XDS mechanisms to carry document level controlling metadata
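As a concrete sketch of what document-level control looks like in XD* metadata, here is an ebRIM Classification carrying a DocumentEntry confidentialityCode. The values and object id are illustrative; verify the identifiers against the current XDS metadata tables before relying on them:

```xml
<!-- Sketch: confidentialityCode carried in XD* metadata, outside the
     document, so access control can act before the document is opened. -->
<rim:Classification
    classificationScheme="urn:uuid:f4f85eac-e6cb-4883-b524-f2705394840f"
    classifiedObject="theDocumentEntry"
    nodeRepresentation="R">
  <rim:Slot name="codingScheme">
    <rim:ValueList>
      <rim:Value>2.16.840.1.113883.5.25</rim:Value>
    </rim:ValueList>
  </rim:Slot>
  <rim:Name>
    <rim:LocalizedString value="Restricted"/>
  </rim:Name>
</rim:Classification>
```

Because this classification travels in the transaction, a gateway can enforce policy on the code without ever opening the document itself.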

There is concern within the community because the HIT Standards Committee had recommended the CDA Header as the proper metadata. My recommendation is consistent with this -- not in letter, as I disagree that the CDA header should be the primary mechanism, but in spirit, as the XD* metadata is purpose-specific metadata that is heavily influenced by the CDA header. The difference is that the XD* metadata is proper metadata, whereas the CDA header is proper documentation.

Sensitivity coding is for Policy, not for communications
There are statements in Section 3.7.5 around sensitivity coding that are misguided and wrong. We have provided expert testimony from both healthcare and military-intelligence perspectives to explain why this is a bad idea. It is true that the HIT Standards committee did include a recommendation along these lines, but they were misguided and wrong too; they didn't have the benefit of the expert testimony that we had. Therefore we should inform the HIT Standards committee that we have learned information that they didn't have. We must not regress and ignore decades of advancement.

Sensitivity codes are needed: they belong in privacy policy rules as tags that identify which rules apply to specific types of data, and which types of data should be handled differently. They can even be used inside a system in proprietary and non-exposed ways (inside the black box). But sensitivity codes are not appropriate as metadata on clinical content. ConfidentialityCodes, which label larger chunks, are the appropriate and sufficient metadata.
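A small Python sketch of the black-box idea. The sensitivity tags and their mapping here are hypothetical; the codes N/R/V are from the HL7 Confidentiality vocabulary. Sensitivity tags select internal handling rules, and only the resulting confidentialityCode ever appears as metadata:

```python
# Hypothetical policy table: sensitivity tags stay inside the system.
SENSITIVITY_POLICY = {
    "HIV": "R",                  # restricted handling
    "SUBSTANCE_ABUSE": "R",      # restricted handling
    "PSYCHOTHERAPY_NOTES": "V",  # very restricted handling
}

def exposed_confidentiality_code(sensitivity_tags):
    """Return the high-water-mark confidentialityCode for the content.

    The sensitivity tags themselves are never exposed; only the coarser
    confidentiality code is emitted as metadata."""
    order = {"N": 0, "R": 1, "V": 2}  # normal < restricted < very restricted
    codes = ["N"] + [SENSITIVITY_POLICY.get(tag, "N") for tag in sensitivity_tags]
    return max(codes, key=order.__getitem__)
```

Note that the output reveals only how carefully to handle the content, not why -- which is exactly the property that tagging content with sensitivity codes would destroy.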

Entry level functionality is NON-Standard and should be identified as a gap for HL7 CDA R3
The mechanism for providing entry-level tagging in Section 3.6 is not standards-based. To promote this method will forever force all systems to implement this non-standards-based approach. It is true that the method leverages extensions built into the standard, but it describes a mechanism that no tooling supports today, and it would be very difficult to get tooling to support it. Further, it leaves many things completely unspecified, as there are no underlying standards.

We should identify this as a gap for HL7 CDA R3 to resolve.
  
Conclusion
I was asked, when I tweeted my progress Sunday afternoon, whether I was coming up with a typical number of comments for a review. At the time I thought this was an excessive number of comments, but looking back at them I must say that the text is in good shape. The majority of my comments are simple constructive change requests. Even the 6 (or 3) big issues are very easy to resolve. I don't think that my recommendations for resolving these big issues are controversial to anyone other than the politically connected. They are well founded in experience and in international standards efforts. Yes, I am passionate that they be fixed, as I am convinced that without these fixes the result will be unusable. I have spent too much time on this project to have it fail due to politics and whim.

Monday, June 11, 2012

What User Authentication to use?

This question is the first topic for the new RESTful Health Exchange (RHEx) workgroup that is starting under the S&I Framework 'affiliation'. I don't know what it means to be 'affiliated' with the S&I Framework, but it is clear from the way it is listed in a different place that it is not like the other workgroups. One difference is that they seem to be using Google Groups and doing more discussion in e-mail. I think this is a plus, as it helps the group take care of simple discussions through e-mail. It also has a cool acronym.

Specifically the question I responded to was:
"What is the reasoning behind using OAuth and/or OpenID instead of PKI/certificates? While PKI is most certainly complex, it has proven to be much stronger technologically than both OAuth and OpenID."
There are many different solutions, proving the space is rich with imperfect solutions. Each potential solution has strengths and weaknesses.

PKI (Public Key Infrastructure) is the workhorse of security technology, made up primarily of X.509 certificates and the infrastructure used to prove that a certificate should be trusted. PKI is actually at the basis of most security technology, and thus almost everything can claim to be using PKI. What each of the other solutions does is try to move the hard part of PKI, the management of certificates, further and further away. Actually doing PKI is very hard work, not because the technology is hard, but because the operational and management aspects are hard. PKI is the center of the Direct Project trust infrastructure, and it works really well for e-mail. But PKI for end-user devices is much too hard for consumers to manage. See Healthcare use of X.509 and PKI is trust worthy when managed and SSL is not broken, Browser based PKI is.

SAML (Security Assertion Mark-up Language) is a wonderful technology for organization-to-organization user assertions. It supports more dynamic content, and is thus better able to capture the current security context rather than just identity. It can be noted that PKI tried to do this with attribute certificates; SAML is simpler to deal with and has advanced beyond what attribute certificates could do. BUT, SAML is really heavyweight for internet consumers to use, or even for some organizational use on the internet. The IHE XUA profile is a profile of SAML identity assertions.

OpenID is similar to SAML but much lighter weight, including only the capabilities typically needed for consumer authentication to web services. It can't quite do everything that SAML can, and it is harder to fully support organization-to-organization federation of the current transactional context. OpenID is very easy to use, both as a consumer and as a relying service. OpenID is very well positioned to support mobile devices and internet consumers on fully capable home machines. It is at the core of many common Internet web services.

OAuth is unique in that it delegates the authority of one identity to a service. This is very helpful for the types of service mashups that mobile, tablet, and Web 2.0+ envision: authorizing one internet-facing service to act as if it were YOU when interacting with a different internet-facing service. You see this often today when a new service asks whether you want to use your Facebook or Google account rather than create a local account. (Some of these are actually using OpenID first, then OAuth.)

I prefer SAML, but do agree that OAuth is magical. Each of these should be leveraged where it best fits, but none of them fits perfectly. Note that WS-Trust (and other mechanisms) can convert any security token into another token type when you use a bridging service that lives in both domains. What this means is that we don't have to choose ONE technology. We can choose OAuth for applications and OpenID for consumers (Patients), while using SAML for organizational individuals (Providers, Clerks, Billing, etc.).
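The "no single choice" point can be sketched as a dispatch at the relying service (all names here are hypothetical): each population's token type is routed to the validator that fits it.

```python
# Hypothetical sketch: one service accepting several token types.
def authenticate(token_type, token, validators):
    """Route a token to the validator registered for its type."""
    validator = validators.get(token_type)
    if validator is None:
        raise ValueError("unsupported token type: " + token_type)
    return validator(token)

# In practice each validator would wrap a real SAML / OpenID / OAuth library.
validators = {
    "saml": lambda t: ("organizational-user", t),  # Providers, Clerks, Billing
    "openid": lambda t: ("consumer", t),           # Patients
    "oauth": lambda t: ("delegated-app", t),       # mashup services
}
```

The service-side plumbing is the easy part; the validators behind each entry are where the real trust decisions live.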

The fantastic thing is that healthcare is not in this quandary alone; all the industries using the Internet today are in the same position. This means that there are already solutions that offer ALL OF THE ABOVE. One example that I have looked at is Hybrid Auth -- http://hybridauth.sourceforge.net/index.html. This means that we don't even need to choose when developing a RESTful interface, service, or application. We can leverage this open-source solution; there is no need to re-invent. In this way we can focus on what the healthcare industry needs to focus on: leveraging this technology.

The hard part is not choosing between these technologies. The hard part is the Policy and Operational choices of identity authorities (ask the Direct Project about this). Even where the technology is chosen, one still will struggle with trust.

Tuesday, June 5, 2012

IHE ITI mHealth Profile - Public Comment

Updated August 2014 -- IHE is updating the MHD profile to align with FHIR (DocumentReference, DocumentManifest). Please refer to the IHE Wiki MHD Status page for current information. Also see the mHealth topic for updated blog articles.

Monday, June 4, 2012

Introduction to IHE Connectathon and Projectathon

There is a nice video that explains IHE, Interoperability, Connectathon, and how Europe - epSOS -  is extending the Connectathon concept to a Projectathon. A projectathon tests your project specific configurations (vocabulary, document types, workflows, etc) in the context of the IHE profiles working together.  This video is well worth the four minutes.