Friday, September 17, 2010

HL7 Ballot for September

There are a few HL7 ballots out this month that are relevant to Security and Privacy. This is a rather light ballot cycle compared to past ones, but the items that are out there are well worth your time to review and comment on.

The following are the two ballot entries that have security or privacy aspects:

  • HL7 Clinical Context Management Specification Version 1.6 
    • This ballot proposes additions to CCOW to support setting and getting the "User" context with SAML Assertions. The expectation is that with this addition the Context Manager and other Participating Applications can have higher assurance that the user was authenticated, can know how the user was authenticated, can know other attributes about the user, and can potentially obtain proxy SAML Assertions based on the user-authentication SAML Assertion. The benefit is that the Applications participating in a CCOW context get something more 'secure' than simply a username and stored password. With SAML Assertions in hand, when they talk to their backends or an external HIE they can use SAML Assertions. This supports Healthcare use of Identity Federation.
  • HL7 Version 3 Standard: Privacy, Access and Security Services (PASS) - Audit Services, Release 1 
    • This ballot takes the IHE ATNA audit log message and uses it as the core audit log schema for a SOA-based Audit Log Repository. The first service entrypoint is a submit-audit-record operation that recognizes the IHE ATNA transaction as compliant. The second service entrypoint is a query endpoint that allows retrieval of the audit log entries that meet the query request. A prime use-case for this query entrypoint is to retrieve all the audit log entries that would inform an Accounting of Disclosures. This supports Accountability using ATNA Audit Controls.
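To make the submit-audit-record side concrete, here is a minimal sketch of building an RFC 3881-style audit message such as an ATNA Audit Record Repository might accept. The element and attribute names are loosely based on RFC 3881 and the values (user, patient, event code) are illustrative only, not a conformant record.

```python
# Illustrative sketch of an RFC 3881-style audit message, the kind of
# record an IHE ATNA Audit Record Repository would collect.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_audit_message(action: str, outcome: int, user_id: str, patient_id: str) -> str:
    msg = ET.Element("AuditMessage")
    event = ET.SubElement(msg, "EventIdentification", {
        "EventActionCode": action,                      # C=create, R=read, ...
        "EventDateTime": datetime.now(timezone.utc).isoformat(),
        "EventOutcomeIndicator": str(outcome),          # 0 = success
    })
    ET.SubElement(event, "EventID", {
        "code": "110112", "codeSystemName": "DCM", "displayName": "Query"})
    ET.SubElement(msg, "ActiveParticipant", {
        "UserID": user_id, "UserIsRequestor": "true"})
    ET.SubElement(msg, "ParticipantObjectIdentification", {
        "ParticipantObjectID": patient_id,
        "ParticipantObjectTypeCode": "1",               # 1 = person
        "ParticipantObjectTypeCodeRole": "1"})          # 1 = patient
    return ET.tostring(msg, encoding="unicode")

xml_record = build_audit_message("R", 0, "drbob@example.org", "patient-123")
print(xml_record)
```

An Accounting of Disclosures query against the repository would then select the records whose patient (ParticipantObjectIdentification) matches the patient of interest.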

Voting and sign-up are ongoing for all active ballots in the September 2010 Ballot Cycle. Ballot pool sign-up closes Monday. The Ballot Opening Announcement on the Ballot Desktop lists all the active ballots; postponed pools are indicated there as well. You can view the full Ballot Opening Announcement here:

Ballot Sign-Up Close Date – September 20
The last day that voters can sign-up to take part in those pools open to sign-up is Monday, September 20 (end-of-day, midnight Eastern time). This is also the last day that non-members can sign-up for Non-Member Participation in this ballot cycle. We encourage everyone to sign up for any pools they are interested in as soon as possible. This way, if you should have any difficulties we can hopefully address them before the sign-up close date.

Ballot Voting Close Date – September 27
Voting for all pools closes on Monday, September 27 (end-of-day, midnight Eastern time).

Wednesday, September 15, 2010

ConfidentialityCode can't carry Obligations

I was approached with a privacy use-case that is commonly brought up when explaining the deficiency of today's privacy standards. I don't disagree that there is a deficiency, but I do want to explain how we get there. The use-case is one where the sender of data wants to express specific obligations to the receiver of data. For example, that the receiver must get the patient's authorization before using the data. What is troubling is that this requirement is often directed at the confidentialityCode as being insufficient to carry this obligation.

The first problem we have is that healthcare decided to call its 'data classification' metadata 'Confidentiality Code', so anything related to 'confidentiality' is naturally directed toward that metadata entry. This is not the role of the confidentialityCode (Data Classification); it is the role of static policy, patient policy, or transactional policy. The confidentialityCode needs to be simply a data classification that represents how sensitive the data inside the object (Document) is. If it carries anything more, it itself exposes information. Hence the efforts to deprecate confidentialityByInfoType (ETH, HIV, PSY, SDV), as these are clear indicators of sensitive medical conditions. See: Data Classification - a key vector enabling rich Security and Privacy controls. Note that this same confidentialityCode is available on CDA, HL7 v3 messages, HL7 v2 messages, DICOM transactions, as well as IHE XDS/XCA/XDR/XDM. It is usable at a few different scopes within these as well (whole document, object, section, etc.). So it is important that we have a simple concept that is used the same way everywhere.

Static policy is the kind of policy that is built up through regulations and business agreements (DURSA). Static policy is what is normally seen as security policy, though sometimes business rules engines enforce some of these policies. I refer to these as 'static' because they don't change often. They can include dynamic shifts in context that have different results, for example break-glass; static policy is not to be confused with unchanging policy. The main point of static policy is that these are the defaults when there is no patient policy or transactional policy. Static policies are generally the kinds of Role-Based-Access-Control rules that explain (over-simplified) which users have which roles, which roles have which permissions, and which permissions authorize access to which type of data. The type of data can be tied to an object-type, table-row, table-column, etc., or it can be a confidentialityCode value. In this way a user is authorized through their roles to have access, or NOT, to specific confidentialityCodes. So Dr Bob has access to NORMAL data but not Restricted. The static policies do need to explain how patient policy and transactional policy will be handled.
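The Dr Bob example above can be sketched in a few lines. This is a hypothetical static-policy check, not any standard's API: the role names are made up, and the code values "N" and "R" follow the HL7 Confidentiality vocabulary (Normal, Restricted).

```python
# Hypothetical sketch: a static (role-based) policy that authorizes
# access by confidentialityCode (data classification).
# Roles and the code-to-role mapping are illustrative only.
ROLE_PERMITTED_CODES = {
    "physician":    {"N"},        # Dr Bob: Normal data only
    "psychiatrist": {"N", "R"},   # may also reach Restricted data
}

def is_authorized(role: str, confidentiality_code: str) -> bool:
    """Static-policy check: does this role reach this data classification?"""
    return confidentiality_code in ROLE_PERMITTED_CODES.get(role, set())

print(is_authorized("physician", "N"))   # Dr Bob sees Normal data
print(is_authorized("physician", "R"))   # but not Restricted
```

The point of the sketch is that the confidentialityCode is only an input to the decision; the rules themselves live in the static policy, not in the code value.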

Patient policy is the kind of patient-specific requests and/or agreements commonly spoken of as consents; this is where most of the standards work is right now. A patient policy applies to all data that is specific to that patient. By corollary, a patient policy does not apply to data that is not specific to that patient, so it doesn't apply to other patients' data, generalized business data, or de-identified data. A patient policy can be just as complex as a static policy; it is just specific to that patient. A patient policy needs to explain how it interacts with the static policies and transactional policies. It also needs to explain what exceptions are allowed; for example, a patient policy should explain whether the patient allows a break-glass exception. A specific example is Australia, where a patient opt-out is non-overridable: there is no break-glass. Patient policy is also where specific authorizations occur, such as identifying a guardian, and where specific denials occur, such as an estranged spouse.

Policy hopping (my term): In simple form this is what the DURSA provides and what NHIN Direct relies on. It says that once a document source decides to release documents to a document consumer, it fully trusts the document consumer to do the right thing. The document source must make sure that it has the right things in place to release the document, but once it is released the source has no further control over the document. If the sender is not comfortable with this, then it simply should not send the document. The document consumer is given no specific authorization or restrictions in the transaction; there might have been some out-of-band 'transactional' agreements, but they are unknown to the transport. The document consumer must do what is right in its context; if its context (State rules) requires it to get a consent before viewing the documents, then that is what it needs to do. In the NHIN, the consent that the document source holds for its purposes is not transferred to the document consumer. There might be a consent that controls the 'discoverability' and 'releasability' of the documents, but it does not authorize the document consumer beyond that.

It is not well known that in the NHIN Exchange under XCA, and in regional Health Information Exchanges using XDS, the document does stay in the control of the sourcing 'organization'. Specifically, with XDS the Document Repository holds the document until the document consumer pulls it. This Document Repository actor can be fully managed by the sourcing organization, and thus an access control decision can be made at the time a document consumer pulls the document. It is true that a Document Repository can be shared among sourcing organizations, and that a Document Repository is not compelled to make access control decisions. But the architecture supports fully distributed and federated control. Specifically, with NHIN-Exchange (XCA) the document retrieval does have access control checks that would deny access if the access request was incomplete or inappropriate. There are HIE architectures where the data is centralized and thus the document source loses control as soon as it publishes.

The use-case given above is specifically looking for a transactional policy, that is, a way to communicate the policy that applies to the content of the transaction. There is much work that needs to be completed before we can describe the policy in enough detail to place it on a transaction. This is not to say that there is nothing close; the current consent standards (BPPC, and the soon-to-be-final HL7 CDA Consent) can be delivered over transactions. But these are very coarse policies, nothing like the kind of detail that would be expressed on a transaction-by-transaction basis. A transactional policy really needs to be more associated with the transaction itself, rather than carried as a component of the content. Transactional policies are not being worked on in the healthcare-specific space, but there is much work going on in general IT. I am involved in some of these non-healthcare standards organizations and hope that they will converge.

The confidentialityCode metadata value should not be overloaded to take on policy obligations; it should stay simply a data classification.

Tuesday, September 14, 2010

Meaningful Use takes Security Audit Logging back a decade

I am sure this is not intentional, but yet another example of poor attention to the details of a requirement is going to hurt the advancement of Healthcare IT again. It appears that Meaningful Use certification is going to take Security Audit Logging back a decade. I am reading clarifications from one of the certifying organizations.

First, to recap what I said (highlighted in yellow) about the final certification rules, specifically on 170.302(r):

  • §170.302 (r)  Audit log. 
    • (1)—Record actions. Record actions related to electronic health
      information in accordance with the standard specified in §170.210(b). 
      • See above §170.210 (a) - Encryption and decryption of electronic health information
    • (2) Generate audit log. Enable a user to generate an audit log for a specific time period and to sort entries in the audit log according to any of the elements specified in the standard at §170.210(b).
      • I read 'generate audit log' as 'create a report from the audit log'.
      • I am not sure all the elements are really that important to sort on.
      • Have the capability to produce reports based on the audit log

Generate audit log. Enable a user...:

Although the rule clearly says "Enable a user...", some certifying bodies have read this to mean that the EHR itself must have the filtering, sorting, and reporting built in. This means that any EHR that has followed our standards recommendations to offload the Security Audit Log to a Service will not be able to show compliance.

I have nothing against an EHR having this reporting functionality built in; what I object to is forcing this functionality to exist in every EHR. This is fine functionality for a small organization, but as EHRs get connected to larger HIEs and the NHIN, the audit log will become very distributed. In order to get a full view one must be able to treat Security Audit Logging as a Service. See: Accountability using ATNA Audit Controls. The result is that a Meaningful Use organization will need to use 20 different tools with no way to bring all the audit logs together for analysis. This puts an unnecessary burden on them.

I also understand that the certifying body will be testing that the EHR can sort on all '...the elements specified in the standard', even though it makes no sense to take 3 years' worth of audit log and sort it by 'patient'. You would then need to scroll through millions of transactions to find the section covering the patient you are interested in. Wouldn't it make more sense to first 'filter' by patient, then sort? Yes it would, but why would a tester recognize that the alternative is illogical? Just imagine taking 3 years' worth of audit log data and sorting it by 'time': not date and time, just time. All the things that happened at midnight at the top, those that happened a second before midnight at the bottom. Logical? No. But will it be a test requirement?
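The filter-then-sort point is easy to show over a toy audit log. This is a sketch with made-up entries and field names, not any EHR's actual log format:

```python
# Why filter-before-sort matters: restrict to the patient of interest
# first, THEN sort chronologically. Sorting millions of unrelated
# entries first helps no one. Log entries are made up for illustration.
from datetime import datetime

audit_log = [
    {"patient": "patient-123", "time": datetime(2010, 9, 1, 8, 30),  "action": "R"},
    {"patient": "patient-456", "time": datetime(2010, 9, 1, 9, 0),   "action": "R"},
    {"patient": "patient-123", "time": datetime(2010, 8, 15, 14, 5), "action": "C"},
]

report = sorted(
    (e for e in audit_log if e["patient"] == "patient-123"),  # filter
    key=lambda e: e["time"])                                  # then sort

for entry in report:
    print(entry["time"].isoformat(), entry["action"])
```

Note that the sort key here is the full date-and-time; sorting on a bare time-of-day field, as described above, would interleave events from different days into a meaningless order.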

Record actions. (and protect them)

I also understand that they really are going to require that the Audit Log be protected by SHA1 hashing. This is no surprise, as the regulation text forces this understanding and the comments reinforce this crazy idea. A SHA1 hash is not the best tool to assure that an Audit Log is authentic and unmodified, and it can't protect against risks to confidentiality or availability. Isolation of the Audit Log in a Service that has strong Access Controls would be the SOA approach.

More to the point, what 'risk' would a SHA1 hash protect against? I think they assume 'all' integrity risks. It will not protect against a legitimate user abusing their rights. It does not protect against an Audit Log that is accessible by more users than it should. It does not protect the confidentiality of the audit log.
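To be precise about what a SHA1 hash does buy you, here is a minimal sketch. The record text is made up; the point is that a stored hash detects after-the-fact modification only if the hash itself is kept somewhere the attacker can't reach, and it does nothing at all for confidentiality or for a legitimate user abusing their rights.

```python
# A SHA1 hash detects modification of a record, nothing more.
import hashlib

def record_hash(record: str) -> str:
    return hashlib.sha1(record.encode("utf-8")).hexdigest()

record = "2010-09-14T10:00Z drbob READ patient-123"   # illustrative entry
stored_hash = record_hash(record)   # must itself be kept out of reach

# Later: verify the record has not been altered.
tampered = record.replace("drbob", "mallory")
print(record_hash(record) == stored_hash)     # unmodified: hash matches
print(record_hash(tampered) == stored_hash)   # modification is detectable

# Anyone who can rewrite the record can usually rewrite the hash too,
# which is why isolating the log behind strong Access Controls matters more.
```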

The IHE ATNA solution offered is to use the SYSLOG-TLS transport. This solution leverages a transport signature (TLS) to protect the audit log message between the creator of the message and the audit record repository. The expectation of IHE ATNA is that the Audit Record Repository 'functionally' protects the confidentiality, availability and integrity of the audit log. IHE ATNA does not get more specific as more specifics do not add any value, but rather restrict the solution space and thus lower value. There are many ways to protect the integrity of the audit log record, it should not be exclusively SHA1 (or any hashing algorithm).
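For a feel of what the SYSLOG-TLS transport carries, here is a sketch of framing an audit record as an RFC 5424 syslog message. The TLS part (an ssl-wrapped TCP connection to the Audit Record Repository) is omitted, and the hostname, app-name, and MSGID values are illustrative, not mandated by ATNA.

```python
# Sketch: wrapping an audit record in an RFC 5424 syslog message, as
# ATNA's SYSLOG-TLS transport does. Transport security (TLS) omitted.
from datetime import datetime, timezone

def syslog_frame(audit_xml: str, hostname: str = "ehr.example.org") -> str:
    pri = 10 * 8 + 5            # facility authpriv (10), severity notice (5)
    timestamp = datetime.now(timezone.utc).isoformat()
    # RFC 5424 layout: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
    return f"<{pri}>1 {timestamp} {hostname} ehr-audit - AUDIT - {audit_xml}"

frame = syslog_frame("<AuditMessage>...</AuditMessage>")
print(frame.split(" ", 1)[0])   # the priority/version prefix
```

Note the integrity protection here is on the connection, not the record: TLS protects the message between creator and repository, and the repository then 'functionally' protects it at rest.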

Hopefully the EHR vendor will find in their database toolkit that they can turn on some settings to automatically calculate and check SHA1 hash on table entries. This will be a garbage-in garbage-out functionality; but it will be compliant.

I am proud of what I did as co-chair of the Security Workgroup in CCHIT back in 2004-2009. We definitely did make the Security of EHR products better. We started simple and pushed little by little with 'functional' criteria consistent with those specified in identified standards (ISO, CC, INCITS, NIST, etc). In the pre-Meaningful Use days the criteria allowed for either strong reporting functionality or export for external analysis. There are still very smart and good people in the CCHIT Security Workgroup, I hope that the CCHIT organization leverages their workgroup membership.

I am glad that Meaningful Use is pushing Healthcare IT to advance in the clinical reporting and quality reporting areas; clearly it is not helping advance Security or Privacy.

Monday, September 13, 2010

Top O' the Summer

Some blogs have started to post a listing of the best blog posts of the week based on hit-count. Given that I don't post many articles each day, I figured I would look back on the whole summer. Google Analytics and Blogger statistics did guide me, but I also included non-quantitative analysis. Mostly this is a listing of articles that are more informative and should stand the test of time. I often re-use these articles when explaining Security or Privacy concepts.

  1. Meaningful Use Security Capabilities
  2. Healthcare use of Identity Federation
  3. Data Classification - a key vector enabling rich Access Controls
  4. Meaningful Use Certification issue with Encryption of data-at-rest 
  5. A Look into the UK breach statistics and by reference the USA breach statistics
  6. Accountability using ATNA Audit Controls 
  7. NHIN-Direct Privacy and Security Simplifying Assumptions 
  8. Stepping stones for Privacy Consent 
  9. Consumer Preferences and the Consumer  and Redaction and Clinical Documentation  
  10. Availability of Consent Documents and their rules

Notable from before:

Wednesday, September 1, 2010

Meaningful Use Certification issue with Encryption of data-at-rest

The final Meaningful Use certification criteria include one security criterion that has caused much discussion. This discussion is healthy, but it is not resolving to a single understanding, and that is potentially putting Meaningful Use qualification at risk. Most of the security requirements are easy to understand and I have outlined them in Meaningful Use Security Capabilities for Engineers.

The troubling requirement is:
  • §170.302 (u) General encryption. Encrypt and decrypt electronic health information in accordance with the standard specified in §170.210(a)(1), unless the Secretary determines that the use of such algorithm would pose a significant security risk for Certified EHR Technology. 
This seems to be easy enough to understand, although it does include the troubling "unless" clause. I am not going to focus on the "unless" clause here, as I have already ranted enough on that. Although this "unless" clause could be the solution to the problem. That is that the Secretary could resolve this lack of understanding.

The problem is not the selection of encryption algorithms. That is handled nicely in §170.210(a)(1). The result is a set of encryption algorithms that are well implemented and well understood. It should be noted, though, that most encryption schemes start with a Digital Certificate and use asymmetric encryption to protect the key(s) that are then used with symmetric encryption on the bulk data. This is included in FIPS 140-2 Annex A, but too often people focus purely on AES (a fine symmetric encryption algorithm).
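The hybrid shape described above (random symmetric session key for the bulk data, asymmetric protection of that key) can be sketched with toy stand-ins. To keep this self-contained, the "cipher" below is a SHA-256 counter-mode keystream standing in for AES, and the asymmetric key-wrapping step is only described in a comment; a real implementation would use AES and RSA-OAEP (or similar) from a proper crypto library against the recipient's certificate.

```python
# TOY sketch of the hybrid-encryption shape. STAND-INS ONLY:
# the keystream below is NOT AES, and the key-wrapping step is omitted.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-in-counter-mode keystream (AES stand-in)."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[block:block + 32], ks))
    return bytes(out)

plaintext = b"electronic health information extract"
session_key = secrets.token_bytes(32)            # fresh random per-message key
ciphertext = keystream_xor(session_key, plaintext)

# In a real scheme, session_key would now be encrypted with the recipient's
# public key (from their Digital Certificate); only the matching private
# key could recover it. That wrapped key travels alongside the ciphertext.

# Stream ciphers of this shape are symmetric: applying again decrypts.
recovered = keystream_xor(session_key, ciphertext)
print(recovered == plaintext)
```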

The problem is trying to figure out what the meaning of "electronic health information" is. Does this mean ALL electronic health information? Does this mean only the extracts identified for Meaningful Use? The comments in the preamble of the regulation are not all that helpful. They simply keep reminding us that the EHR "must be capable of performing encryption". Does this mean that however the vendor provides for encryption is good enough? I suspect not.

The one comment that leads me away from the EHR servers and toward the extract is the inclusion of NIST Special Publication (SP) 800-111. The scope of NIST SP 800-111 is "end user devices". This tells me that their worry is laptops, tablets, smart phones, USB-Memory Sticks, CDs, and DVDs. This helps scope the definition of "electronic health information" to those instances where it appears on an end user device.

The NIST-defined test procedures are not much help. They seem to be applicable to just about anything, thus leaving it up to the vendor to define what 'test data' is (see Test Procedure for §170.302 (u) General Encryption). I am not sure what real functionality is being delivered when the vendor gets to define what the test will test. It is actually right not to further refine a requirement in the test procedure, but I had hoped. This test procedure could certainly be used to test that an extract of electronic health information intended to be saved onto an inherently portable device is indeed encrypted.

This also aligns nicely with the experience learned from A Look into the HHS Posts Data Breach Notifications. There are simply a huge number of breaches associated with inherently portable end user devices. Had the data on these devices been encrypted, the data would not have been exposed. I am not a fan of reducing the solution to this problem to simply encryption, as there are other ways to protect end user devices. More to the point, there are new risks introduced by encryption that need to be considered. But I will leave that discussion inside NIST SP 800-111, where it is already covered nicely.

Off-the-shelf transparent encryption

Encryption of data-at-rest should not be seen as an EHR problem. There are many levels of abstraction that software developers use to separate functionality. Where a functionality is needed by many different applications, it is pushed down into a lower level where it can be re-used. For example, no EHR includes code to handle interacting with the keyboard hardware. The EHR uses the functionality of the operating system, which provides a reasonable set of abstract methods of interacting with the human. The EHR might have special interpretations of some key-sequences, but those same key-sequences could be provided by many different types of input. If this weren't done, there would be much more work to get an EHR running on a tablet computer that has no physical keyboard but rather a virtual facsimile of one.

This abstraction is done for many subsystems, including things like USB-Memory sticks. To the EHR these simply look like another file-system. The Healthcare Provider or Healthcare Provider Organization could choose a USB-Memory stick that automatically encrypts its contents. A quick survey of Amazon shows 72 different USB-Memory sticks that encrypt (e.g. IronKey). The EHR would be unable to know, without proprietary means, that the data was indeed encrypted. In fact this solution is already available and in use today.

Another example is an encrypting hard-drive. There are many solutions that will transparently encrypt a hard drive. Some are hardware based, some are built into the operating system, some are built into the database manager, and some are add-on packages. All are available as solutions today and many are used today.

The problem here is that by getting the EHR vendor involved we end up with less choice. This is because the EHR vendor must choose ONE solution that they are going to certify with. It is unlikely that the EHR vendor is going to certify 72 times for 72 different off-the-shelf transparent-encrypting USB-Memory sticks, and dozens more times with different off-the-shelf transparent-encryption software. Thus the Healthcare Provider is forced to use that ONE choice, because the rules of Meaningful Use require that the Provider use the certified EHR functionality.

Far better to recognize that this general encryption capability is abstracted below the EHR, and that the operational environment can already make these choices today. At minimum, allow the EHR vendor to claim a 'class of solution', where it is understood that they certified with a representative instance of an off-the-shelf transparent encryption solution. Forcing less choice is not a good idea.

Portable Standards

I will assert that this leaves only the question of how an EHR produces an encrypted data-set when not using off-the-shelf transparent means. One way is to use industry standards, such as encrypted-ZIP. This is a well-known ZIP format that supports encryption with a password or digital certificates. It is, however, not an open standard.

We need open standards for encrypting blobs of data at rest in a way that is fully interoperable. The bad news is that there are no good solutions today; if there were, then IHE would have included one as an option in the XDM profile. However, there is movement.

The DICOM specification now includes support for encrypted portable media, mostly documented in Annex D of Part 15. They solved the problem by indicating that the standards used for secure email (S/MIME) can be used to create an encrypted file that is a MIME multi-part. This results in a single object that looks just like a single e-mail containing everything. Thus they take their portable media definition for using e-mail, and say that the e-mail can be seen as a portable encrypted file. Their portable media definition uses ZIP to preserve the file-system.

The method that DICOM specified for portable media could be integrated into the IHE XDM profile: the existing XDM file-system, which already has a ZIP format, would be encapsulated in the MIME multi-part and encrypted using S/MIME methods. The result is a portable encrypted file that can be manipulated in many ways. It's not just for e-mail.
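The container shape is easy to sketch: a ZIP of the (toy) XDM file-system, wrapped in a MIME message as for e-mail. The file names and content below are made up, and the final S/MIME (CMS) encryption step is outside the Python standard library, so it is only noted in a comment.

```python
# Sketch of the DICOM/XDM-style portable-media container: a MIME
# message holding a ZIP of the file-system, which S/MIME would then
# encrypt into a single portable file. Content is illustrative.
import io
import zipfile
from email.message import EmailMessage

# 1. ZIP a toy XDM file-system (path and document are made up).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("IHE_XDM/SUBSET01/DOC0001.xml", "<ClinicalDocument/>")

# 2. Wrap the ZIP in a MIME message, exactly as for secure e-mail.
msg = EmailMessage()
msg["Subject"] = "XDM portable media"
msg.add_attachment(buf.getvalue(), maintype="application",
                   subtype="zip", filename="xdm.zip")

# 3. In a full implementation, msg.as_bytes() would now be encrypted
#    with S/MIME to the recipient's certificate, yielding one portable
#    encrypted file usable on media or over e-mail alike.
portable_bytes = msg.as_bytes()
print(msg.get_content_type())
```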

This movement is slow because the Off-the-shelf Transparent Encryption fills the need so well.