I got the answer to the question I have been hoping to ask. The question I asked is: What does ONC think they have specified with the three transports, with the options of (a)+(b) or (b)+(c)? For more detail on how this question comes about see my blog articles: Meaningful Use Stage 2 : Transports, Minimal Metadata, and Karen's Cross or just Minimal Metadata. Essentially I wanted to ask the question so that I can understand the desired state. Once I know the desired state then I can help get clarification.
The good news is that the ONC goal is very reasonable and very progressive. The goal is to recognize that basic Direct transport is a good start, but that it is insufficient to support more complex and robust workflows. They recognize that there are many Exchanges using XDR/XDS/XCA, including CCC, Connecticut, Texas, and NwHIN-Exchange. Thus they want to encourage, by naming specific options, EHR vendors to reach beyond the minimal e-mail transport down the pathway of a more robust interaction. I speak of this stepping-stone approach in a couple of blog posts, so I am happy about this: Stepping stone off of FAX to Secure-Email, What is the benefit of an HIE, and HIE using IHE.
I am not going to fully define Karen's Cross here, but the whole Karen's Cross specification recognizes an asymmetry: although one might be able to take an inbound Direct message that is not in an XDM content package and invent hard-coded metadata for the XDR outbound link, an inbound Direct message that arrives with XDM content packaging carries exactly the metadata that the outbound XDR transaction needs. Note that the reverse, an inbound XDR that needs to be converted to Direct, is easy: the XDR content is just encapsulated in XDM and sent over Direct.
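This asymmetry can be sketched in code. The following is a hypothetical illustration, not anything from the Direct or XDR specifications; the function and field names are mine. The point is that converting XDM-packaged Direct mail to XDR is a metadata pass-through, while plain Direct mail forces the gateway to invent placeholder metadata.

```python
# Hypothetical sketch of the Karen's Cross asymmetry. Field names and
# placeholder values are illustrative, not from any specification.

def to_xdr_metadata(direct_message):
    """Build XDR submission metadata from an inbound Direct message."""
    if direct_message.get("xdm_metadata"):
        # XDM-packaged mail already carries exactly the metadata that
        # the outbound XDR transaction needs: pass it through.
        return dict(direct_message["xdm_metadata"])
    # Plain Direct mail: the gateway must invent hard-coded values.
    return {
        "patientId": "UNKNOWN",
        "classCode": "UNKNOWN",
        "sourceId": "direct-to-xdr-gateway",
    }

plain = {"body": "note.txt"}
packaged = {"body": "note.txt",
            "xdm_metadata": {"patientId": "12345",
                             "classCode": "34133-9",
                             "sourceId": "sending-ehr"}}
print(to_xdr_metadata(plain)["patientId"])     # invented placeholder
print(to_xdr_metadata(packaged)["patientId"])  # carried through from XDM
```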
This does remind everyone that Direct requires that you support receiving XDM content, which minimally means you have the ability to read a ZIP file and display the INDEX.HTM file at the root of that ZIP file. Really easy, and almost hard to get wrong.
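Minimal receive-side handling really is that small. A sketch in Python, assuming the XDM package has already arrived on disk (the path is a placeholder):

```python
import zipfile

# Minimal XDM receive: open the ZIP package and return INDEX.HTM from
# its root, which the Direct specification requires to be present.
def read_xdm_index(path):
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            # Root entries have no "/" in them; tolerate case variants.
            if name.upper() == "INDEX.HTM":
                return zf.read(name).decode("utf-8", errors="replace")
    raise ValueError("Not a valid XDM package: no INDEX.HTM at the root")
```

A receiver could simply hand the returned HTML to a browser component for display.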
So what ONC wants to encourage is the sending of Direct messages with the XDM content packaging. This is a part of the Direct specification, but is in there with conditions that make it only required IF your sending system has the ability to send using the XDM content packaging. So, ONC came up with the first option [(a)+(b)] to encourage the use of XDM content packaging with the Direct specification.
The third option - what they call (b)+(c) - is trying to encourage the use of XDR with the specific secure SOAP stack. They could have done this with simply XDR+XUA+ATNA, because that is what the interoperability specification turns out to equal. I would say that the Secure SOAP Transport -- (c) -- is indeed more constrained than ATNA and XUA, as it forces only TLS and has some vocabulary bindings. The advantage of including (c) as a standalone transport is that it sets the stage for future transports such as a secure HTTP/REST (e.g. RHEx); and by being just a secure SOAP stack it encourages experimentation with XCA/XCPD.
So we now know that the testing for (a)+(b) would be testing the Direct Transport using XDM content packaging; and the testing for (b)+(c) would be testing XDR with mutually-authenticated-TLS and XUA user assertions. This is a good path that I can help ONC get clarified. The diagram above is originally from EHRA and has been used by IHE. I have co-written the IHE white paper on this topic and presented it along with Karen Witting. This is a fantastic result; now comes the hard part of getting an ONC-written FAQ, and test plans that hit the right things.
Discussions of Interoperability Exchange, Privacy, and Security in Healthcare by John Moehrke - CyberPrivacy. Topics: Health Information Exchange, Document Exchange XDS/XCA/MHD, mHealth, Meaningful Use, Direct, Patient Identity, Provider Directories, FHIR, Consent, Access Control, Audit Control, Accounting of Disclosures, Identity, Authorization, Authentication, Encryption, Digital Signatures, Transport/Media Security, De-Identification, Pseudonymization, Anonymization, and AI Transparency.
Sunday, September 9, 2012
Friday, September 7, 2012
MU2 Wave 1 of Draft Test Procedures -- Integrity Problem
The first wave of Draft Test procedures is out:
For more information, and the Wave One 2014 Edition draft Test Procedures, please visit http://www.healthit.gov/policy-researchers-implementers/2014-edition-draft-test-procedures
This is an opportunity to see whether you have the same interpretation of the Final Meaningful Use Stage 2 rules as the Testers have. I looked at three of the test procedures that fall into my scope.
- §170.314(d)(5) Automatic log-off Test Procedure
- I think they correctly changed this to reflect the various ways that are used and are appropriate. It will be interesting to see specific types of EHR technology tested against this procedure; it is possible someone might still be confused.
- §170.314(d)(8) Integrity Test Procedure
- I think they are way off base, too aggressively focused on the detail and losing sight of the overall picture. They continue to have language in their test procedure that caused me to write my most popular article of all time, "Meaningful Use Encryption - passing the tests". I am not happy about that article, but it gets to the point. The requirement for Integrity, just like the requirement for Encryption, is there to assure that wherever Integrity or Encryption technologies are utilized, legitimate and approved algorithms are used. Quite often this is next to impossible to prove. The best place to prove it is where interoperability protocols are used. The Direct Project and the Secure SOAP Transport have these algorithms built in, so testing these for interoperability will have the effect of testing the Integrity and Encryption lines. Thus a standalone procedure should focus ONLY on uses of Hashing or Encryption other than those specified in the Transports section, and nothing but Transports is required. This procedure should therefore start with "The EHR vendor shall identify how they utilize Integrity other than through defined Transports." and then focus the testing on those uses. This is not going to be easy, as the place where this happens is transparently in Databases and Data-At-Rest; thus there is nothing that the EHR vendor can possibly show. I think this item should be… Not Tested outside of its integration as part of Transport.
- §170.314(d)(9) Optional—accounting of disclosures Test Procedure
- I think they got this one in good shape too. It now is clear that the interpretation of this optional criteria is a User Interface where the user can indicate that a "Disclosure" has happened. Thus this is not any automated accounting, but does provide for a way to identify disclosures using readily available technology at the fingertips of those that might be involved in a legitimate disclosure. The test procedure seems reasonable as well.
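To make the earlier Integrity point concrete: outside of the transports, about the only demonstrable use is a stored digest that is recomputed on read-back, using an approved algorithm. A minimal sketch (the function name is mine):

```python
import hashlib

# Integrity check outside a transport: store a SHA-256 digest alongside
# the data, and recompute it on read-back. SHA-256 is an approved
# security function in FIPS 140-2 Annex A; MD5, for example, is not.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

record = b"clinical note v1"
stored = digest(record)

# Later, on read-back:
print(digest(record) == stored)             # True: record is intact
print(digest(b"clinical note v2") == stored)  # False: change detected
```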
Thursday, September 6, 2012
On The Meaningful Use Stage 2 Rules
I have written many articles on the US Centric - Meaningful Use Stage 2. Some are very deep analysis of specific problems. It may seem that all I have to say is negative and nitpicky things. I want to make it clear that I am very happy with how Stage 2 came out. I just find that others have done a fantastic job of outlining the rules, so I have focused only on deep analysis in my area of expertise where I see potential confusion.
First, the ONC and CMS summaries are great stuff. They do leave out details that end up being the subject of my blog articles. But they are really good summaries. Keith has clarified much as well. I have seen plenty of other summaries; I wish I had a catalog. There is just no good reason for me to pile on.
Meaningful Use:
- Stage 2 Final
- Meaningful Use Stage 2 : Transports
- This is a summary of my views on Transport, including what is mandatory, what is not, and why you might still want to certify to optional transports.
- I also cover Encryption and Hashing
- Meaningful Use Stage 2 - Audit Logging - Privacy and Security
- This is a detailed analysis of the Audit Logging requirements. I explain ASTM E2147 so that you don’t have to read it, while encouraging you to read it. I explain how you can go above and beyond and have happy Providers and Patients. Audit Logging is a thing best unseen, but when it is needed it had better be good stuff.
- Minimal Metadata
- Minimal Metadata is a view on Healthcare Metadata that is specific to PUSH use-cases. The good news is that this has been adopted by IHE; the better news is that it is simply good logic. This doesn’t mean that Healthcare Metadata is unneeded, far from it. The reason why you need well-defined places to put metadata is to assure communications.
- Karen's Cross or just Minimal Metadata
- Transport (b) is confusing, even to me, and I was there when it was written. This gets especially difficult when the MU2 rules require that when certifying to optional transports one MUST always include (b). I really don’t know how this works. Transport (b) is to Healthcare Transports as the trucking depot is to global shipping containers. They are critical for moving content from one transport to another, but there need only be a few of them, at locations where shipping meets train meets trucking meets barge.
- Stepping stone off of FAX to Secure-Email
- What is Direct and why is it important and what is next. Knowing why you are doing something is important, knowing what is over the horizon is also important.
- Encryption is like Penicillin
- Not really written regarding Stage 2, but relevant. Just encrypting everything is not an answer; it is security theater. Risk Assessment makes it real.
- Stage 2 NPRM
- Meaningful Use Stage 2 seems to support Security, Privacy, and HIE Transport
- The majority of this is still true. Most of my complaints have been fixed. The good news is that we now have fundamental Security, Transport and PRIVACY!
- Meaningful Use Stage 2 FINALLY means Secure and Privacy Protecting
- This is a deeper analysis of the NPRM, again most of my complaints have been fixed.
- Meaningful Use Stage 2 -- 170.202 Transport
- Details on the Transports. Minor fixes were made, but the Karen’s Cross damage was also done.
- Predicting Meaningful Use Stage 2 Security
- I nailed it… Yes, it helps to be on the committee doing the Federal Advising. There is still very useful information here.
- Stage 1
- Meaningful Use Security Capabilities for Engineers
- Still useful for those items that didn’t change, and there were quite a few. Most just got important but minor fixes.
- Meaningful Use Encryption - passing the tests
- I hope this one NEVER NEEDS TO BE SEEN AGAIN.
- Meaningful Use clearly does not mean Secure Use
- I can say that this one HAS BEEN RESOLVED. Stage 2 is much better. Nothing beats a Security Risk Assessment, and HIPAA/HITECH still requires it.
- How to apply Risk Assessment to get your Security and Privacy and Security requirements
Tuesday, September 4, 2012
Meaningful Use Stage 2 : Transports
There are two perspectives
1) The Transport standard is clear. It is Direct. Everything else said in the regulation about transports is optional and therefore meaningless.
2) There is still the problem of the (b) Transport, which pulls in (b)+(c) and also (a)+(b)
Simple View – Minimum work to get Certified
There is no question what is minimally required, it is the Direct Project. So, test to this and be done. This is the easiest way to get through certification and get your CEHRT stamp.
Note that just because the Direct Project is the only required Transport, does not mean that it is the only Transport that can be used. I heard Steve Posnack reiterate this on many of the webinars. The CMS rule doesn't differentiate between transports, it is more concerned with content and outcomes (AS IT SHOULD BE).
The Direct Project is a pointed solution, but not necessarily without issues; here are a bunch of my blog articles:
- Minimal Metadata
- Direct addresses- Trusted vs Trustable
- Implementation Guidelines for State HIE Grantees on Direct Infrastructure & Security/Trust Measures for Interoperability
- Can Direct messages be "delegated/forwarded?"
- Testing your XDM implementation
- Trusting e-Mail
Let's just say that you don't like doing the minimum work for Certification, or that you want to go above-and-beyond, or that your EHR is so far from supporting Direct that you need alternatives. On the last point, sorry, but you must support Direct. The problem with the Transports other than Direct is simply confusion.
I cover these problems in detail in Karen's Cross or just Minimal Metadata. The diagram at the right is informative; it is Karen's Cross. The GREEN arrows are the (a) transport, Direct. The BLACK arrows are the (c) transport, Secure SOAP. The RED arrows are the (b) transport, a proxy service that converts (a) to (c) and (c) to (a). The (b) transport is not a transport and thus can’t be called upon as a transport. Add to that that the MU2 specification always groups the (b) transport with either (a) or (c), and one has something that simply doesn't compute. I guess that ONC really just wants the Minimal Metadata, but I am not sure. I think they are actually asking CEHRT to somehow certify that they can work in an operational environment where someone else provides the (b) proxy service. The use of the (b) proxy service is an operational aspect that should have been placed upon the CMS side.
Besides the Karen's Cross or just Minimal Metadata issue, one can just look at the (c) transport and treat it as I outlined in Meaningful Use Stage 2 -- 170.202 Transport. Essentially the Secure SOAP stack is simply the lower half of all of the SOAP based profiles found in IHE. ONC has chosen to chop horizontally, where IHE builds vertically. This is shown in the eye chart to the left, which is not intended to be readable. Either way you slice it you have a secure SOAP transport stack that is carrying some SOAP content.
Thus it matters little if you use any of the Data Sharing profiles from IHE (XDR, XDS, XCA) or the Patient Management profiles from IHE (XCPD, PIXv3, PDQv3). What does matter is that you MUST be using ATNA secure communications, and XUA user assertions. YES the IHE profiles are the parent of the NwHIN-Exchange specification and are compatible. It is not that I work hard to propagate my view of the world, I work hard to keep divergence from happening when it is not necessary. I am very willing to entertain necessary divergence, and have lots of evidence that I support Direct.
But what about Encryption and Hashing? The MU2 requirement gets specific about Encryption and Hashing, but don’t worry.
§170.210(f) Encryption and hashing of electronic health information. Any encryption and hashing algorithm identified by the National Institute of Standards and Technology (NIST) as an approved security function in Annex A of the FIPS Publication 140-2 (incorporated by reference in § 170.299).

This Encryption and Hashing requirement is important but not hard to meet. The important part is that proprietary encryption is unacceptable and old encryption algorithms are unacceptable; modern algorithms (AES for encryption, the SHA family for hashing) are acceptable. The use of FIPS Publication 140-2 allows HHS and CMS to benefit from the intelligence community's assessment of cryptographic algorithms, thus moving up automatically when the intelligence community does. The use of Annex A rather than the core FIPS 140-2 specification allows for relaxed rules around certification; this doesn’t change the technical aspect, but it does greatly reduce the invasive code-inspection requirements of actual FIPS certification. Annex A is very short, only 6 pages. The summary: Encryption: AES or 3DES; Hashing: SHA-1 or higher.
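As a sketch of what that summary means in practice, here is a hypothetical allow-list check; the function name and algorithm labels are mine, not from the regulation, and a real product would check the actual cipher suites in use:

```python
# Simplified allow-list in the spirit of the FIPS 140-2 Annex A summary:
# approved encryption (AES, 3DES) and hashing (SHA family) pass;
# proprietary or legacy algorithms fail. Labels are illustrative.
APPROVED_ENCRYPTION = {"AES-128", "AES-192", "AES-256", "3DES"}
APPROVED_HASHING = {"SHA-1", "SHA-256", "SHA-384", "SHA-512"}

def meets_170_210_f(encryption: str, hashing: str) -> bool:
    """Check a transport's algorithm pair against the Annex A summary."""
    return encryption in APPROVED_ENCRYPTION and hashing in APPROVED_HASHING

print(meets_170_210_f("AES-128", "SHA-1"))  # True: Direct's baseline
print(meets_170_210_f("DES", "MD5"))        # False: legacy, unacceptable
```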
All of the Transports fully include security as part of the specification, so they are by definition already compliant with the Encryption and Hashing requirements.
- Direct – S/MIME authenticated using X.509 Certificates/Private Keys, Encrypted with AES128 or AES256, and Hashed with SHA1 or SHA256.
- Secure SOAP – secured with Mutual-Authenticated-TLS using X.509 Certificates/Private Keys, Encrypted with AES, and hashed with HMAC-SHA1; for more details see: Moving to SHA256 with TLS requires an upgrade.
- Secure SOAP – End-to-End - This is in IHE ATNA, but not in MU2 – There is an option to use WS-Security end-to-end security, but this also requires an update of common SOAP stacks and is administratively harder to achieve. Risk Assessment needs to drive the cost-benefit analysis.
- Secure HL7 v2 – There is no mention of this dirty little secret, but all of those HL7 v2 requirements in the regulation would also need to meet the Encryption and Hashing requirement. The solution here is to use the Mutual-Authenticated-TLS as is used in the Secure SOAP stack. Many toolkits support this, but not all of them. At IHE Connectathon we run into people who have forgotten to test this, they usually get going quickly.
- Patient Engagement - Secure Messaging – There is no guidance on what Secure Messaging is, and I think this is the right solution. But whatever is used for Secure Messaging must also meet the § 170.210(f) requirements. Given that the requirements are just focused on Encryption and Hashing; this is easily met with a typical HTTPS web-portal.
- Data at Rest – End-user device encryption. -- Okay, this isn't a transport, but whatever solution is used to protect data at rest must also meet the Encryption and Hashing requirements. A good commercial solution, or even the solutions built into operating systems, covers this. What they don’t cover is KEY MANAGEMENT. If you don’t protect the key, then it doesn't matter how well the data is encrypted.
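For the HL7 v2 and Secure SOAP cases above, Mutual-Authenticated-TLS is a few lines of configuration in most toolkits. A sketch using Python's standard ssl module; the certificate file names are placeholders:

```python
import ssl

# Mutually-authenticated TLS, as used in the Secure SOAP stack and
# recommended above for HL7 v2. File paths are placeholders.
def mutual_tls_client_context(ca_file, cert_file, key_file):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(ca_file)        # trust anchors for the server
    ctx.load_cert_chain(cert_file, key_file)  # our identity, shown to server
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def mutual_tls_server_context(ca_file, cert_file, key_file):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)  # our identity, shown to client
    ctx.load_verify_locations(ca_file)        # trust anchors for clients
    ctx.verify_mode = ssl.CERT_REQUIRED       # the "mutual" part: require a
    return ctx                                # client certificate
```

The server-side `verify_mode = ssl.CERT_REQUIRED` line is the one implementers most often forget, which matches the Connectathon experience described above.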
The transport to certify is clear: just get Direct done somehow. If you can’t do Direct, then you are going to struggle with trying to figure out what is going to be required of you. The Test Tools will likely answer this eventually; there certainly is nothing clear today to start developing toward. Stick with XDR, which is a subset of XDS. This solution is highly reusable.
Four years of blogging
So what? I was pushed into blogging because I found that I was explaining things over and over. The sad part is that I still tell the same stories over and over, hopefully to new people. I hope that each time I add something and truly am communicating and educating. I know that I am learning as I go, so I suspect the articles get better.
In September of 2009 I posted these articles. I am not excited that this list could be produced today. I hope that I am making progress:
- Health Data Breach Rules - Started but not enforce...
- IHE Releases White Paper on Access Control
- Kerberos required in 2011 then forbidden in 2013
- Encryption now Mandatory
- Groups give Obama high grade for medical privacy
- HIT Standards - Meaning of S&P selections
- HIT Standards Committee Recommendations Public Inp...
- HITSP August 2009 face-to-face -- Security, Privac...
- HITSP Consumer Preferences Tiger Team, SPI and Con...
I have a "Topics" button on my blog that contains pointers to the most useful articles from my blog, arranged by topic. I keep that up-to-date. Here it is as of today:
Security/Privacy Bloginar: IHE - Privacy and Security Profiles - Introduction
User Identity and Authentication
Access Control (Consent enforcement)
Other
User Identity and Authentication
- Direct addresses- Trusted vs Trustable
- Identity - - Proofing
- The Emperor has no clothes - De-Identification and User Provisioning
- What User Authentication to use?
- IHE - Privacy and Security Profiles - Enterprise User Authentication
- IHE - Privacy and Security Profiles - Cross-Enterprise User Assertion
- Healthcare use of Identity Federation
- Federated ID is not a universal ID
- Separation of Layers: Security Error Codes
- Authentication and Level of Assurance
- A broadly usable HIE Directory
- Healthcare Provider Discoverability and building Trust
- Healthcare Provider Directories Profile
- Healthcare Provider Directories -- Lets be Careful
- Policy Enforcing XDS Registry
- Healthcare Metadata
- Texas HIE Consent Management System Design
- IHE - Privacy and Security Profiles - Access Control
- Data Classification - a key vector enabling rich Security and Privacy controls
- Healthcare Access Controls standards landscape
- Handling the obligation to prohibit Re-disclosure
- Access Controls: Policies --> Attributes --> Implementation
- Patient Data in the Audit Log
- IHE - Privacy and Security Profiles - Audit Trail and Node Authentication
- Accountability using ATNA Audit Controls
- ATNA and Accounting of Disclosures
- ATNA audit log recording of Query transactions
- How granular does an EHR Security Audit Log need to be?
- Document Submission: Audit requirements under error conditions
- ATNA + SYSLOG is good enough
- Direct addresses- Trusted vs Trustable
- Identity - - Proofing
- Securing RESTful services
- Healthcare use of X.509 and PKI is trust worthy when managed
- SSL is not broken, Browser based PKI is
- Meaningful Use Stage 2 :: SHA-1 vs SHA-2
- Trusting e-Mail
- S/MIME vs TLS -- Two great solutions for different architectures
- Healthcare Provider Discoverability and building Trust
- Using both Document Encryption and Document Signature
- Document Encryption
- IHE - Privacy and Security Profiles - Document Digital Signature
- Signing CDA Documents
- Using both Document Encryption and Document Signature
- Non-Repudiation is a very old art
- The Emperor has no clothes - De-Identification and User Provisioning
- De-Identification is highly contextual
- Redaction and Clinical Documentation
- IEC 80001 - Risk Assessment to be used when putting a Medical Device onto a Network
- More Webinars on Basics of IEC 80001
- IEC 80001 - Security Technical Report presentation
- How to Write Secure Interoperability Standards
- How to apply Risk Assessment to get your Security and Privacy and Security requirements
- Healthcare Metadata
- Minimal Metadata
- What is the benefit of an HIE
- Karen's Cross or just Minimal Metadata
- HIE using IHE
- Texas HIE Consent Management System Design
- The French Health Information Systems Interoperability Framework -- Now available in English
- One Metadata Model - Many Deployment Architectures
- Critical aspects of Documents vs Messages or Elements
- Using both Document Encryption and Document Signature
- Document Encryption
- XDS/XCA testing of Vocabulary Enforcement
- Where in the World is CDA and XDS?
- Universal Health ID -- Enable Privacy
- HIE/HIO Governance, Policies, and Consents
- Stage 2 Final
- Meaningful Use Stage 2 - Audit Logging - Privacy and Security
- Minimal Metadata
- Karen's Cross or just Minimal Metadata
- Stage 2 NPRM
- Meaningful Use Stage 2 seems to support Security, Privacy, and HIE Transport
- Meaningful Use Stage 2 FINALLY means Secure and Privacy Protecting
- Stepping stone off of FAX to Secure-Email
- Meaningful Use Stage 2 -- 170.202 Transport
- Predicting Meaningful Use Stage 2 Security
- Stage 1
- Patient Identity Matching
- The Basics of Cross-Community Patient Discovery (XCPD)
- NwHIN-Exchange use of XCPD for Patient Discovery
- Direct addresses- Trusted vs Trustable
- Karen's Cross or just Minimal Metadata
- Minimal Metadata
- Direct addresses- Trusted vs Trustable
- Implementation Guidelines for State HIE Grantees on Direct Infrastructure & Security/Trust Measures for Interoperability
- Can Direct messages be "delegated/forwarded?"
- Testing your XDM implementation
- Trusting e-Mail
Other
- Encryption is like Penicillin
- Healthcare is not secure - trust suffers
- Creating and using Unique ID - UUID - OID
- Distributed Active Backup of Health Record
- Workflow Automation Among Multiple Care-Providing Institutions
- Effective Standards Evaluation - Guest blog from Karen
- Are Documents Dead?
Monday, September 3, 2012
Meaningful Use Stage 2 - Audit Logging - Privacy and Security
Updated August, 2014 -- It seems people are still reading this post. I am simply adding a forward pointer to the Audit Logging Topic where you might find fresher blog articles.
Saturday, September 1, 2012
Direct addresses- Trusted vs Trustable
I cover the Identity - - Proofing problem that Direct is having right now. Some are arguing that Identity Proofing is not needed; I argue that it is always needed, but sometimes the proofing is done in a distributed way and sometimes it is done centrally. The Identity-Proofing is not the critical thing that is needed; it is just an example of people confusing Trusted with Trustable. Trustable is what we should be focused on. Trustable is associated with Identity and Authentication. Trusted is a result of a determination of Authorization. Identity-Proofing status is important to carry in the Identity, but it is just an attribute. It is one of many attributes in the Certificate that is used to determine if this certificate ‘is the one’ – the one that I should Trust.
The Deep Dive
This brings up the question of what are the important use-cases, and exactly what is the RISK that the certificate system in Direct needs to solve.
This is easiest seen if I start with the controversial use-case, the one most likely to be used by a General Provider with their Patients. In this case the GP knows who their patients are, at least well enough to help them get healthy; thus the GP really has done a Proofing step. In this case the Patient has a conversation with their GP and tells them their PHR e-mail address (e.g. HealthVault account e-mail address). The GP puts this e-mail address into his Direct solution for sending secure e-mail and documents. Note that I will show that the case where the sender didn’t do in-person proofing is a super-set with a few additional requirements, and no fewer requirements.
The Risk
In this case, when the GP sends something he needs the infrastructure to make sure that the certificate that is discovered is truly the certificate for the e-mail address given. That is, a malicious individual, the attacker, wants the content of the Direct message. To get it, the attacker needs the sender to send the secured e-mail with the encrypted envelope targeting a certificate that the attacker holds the private key for. The Direct Project uses really good security, both signed and encrypted e-mail using really good crypto algorithms, so the attacker must become what is seen as the intended recipient. Let me explain:
In non-secure e-mail (normal e-mail) this is rather easy: you just attack the DNS system and return your mail-server address faster as the location for the e-mail to be sent (the DNS MX-record lookup, or DNS server address lookup). If you do this with an S/MIME-protected message you will get nothing but the encrypted message. If poor encryption is used, you can beat on that e-mail message until you crack it; but if we choose good algorithms this effort takes too long to be useful, and Direct specified strong algorithms. So we have this risk handled, through picking an end-to-end solution like S/MIME and good algorithms. And we leave the transport open for best flexibility, flexibility that is really useful for redundancy and robustness. As strange as it might seem, we simply live with this residual risk because it has a very low impact. See How to apply Risk Assessment to get your Security and Privacy and Security requirements.
The attacker wants access to the content of the secure e-mail, so they need to attack DNS again, but this time get it to return quickly with a certificate that the attacker controls. If the sending system doesn’t check this certificate but just sends to it, then the attacker is back at non-secure e-mail and has full use of the content sent. The Direct Project requires the certificate checking that I will outline below, so compliance to the specification is important. Note that the certificate checking is mostly what all systems do when checking certificates.
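The required checking amounts to ordinary certificate validation: address binding, validity period, trusted issuer, and revocation status. A simplified sketch over a hypothetical certificate record; the field names, trust anchors, and revocation list are mine, not the specification's data model:

```python
from datetime import datetime, timedelta, timezone

# Simplified model of the checks a Direct sender must make before using
# a discovered certificate. All names and values are illustrative.
TRUSTED_ANCHORS = {"Example-HISP-CA"}   # hypothetical trust anchors
REVOKED_SERIALS = {"0badc0de"}          # hypothetical CRL contents

def certificate_is_acceptable(cert: dict, intended_address: str) -> bool:
    now = datetime.now(timezone.utc)
    return (
        cert["subject_email"].lower() == intended_address.lower()  # binding
        and cert["not_before"] <= now <= cert["not_after"]         # validity
        and cert["issuer"] in TRUSTED_ANCHORS                      # trusted CA
        and cert["serial"] not in REVOKED_SERIALS                  # revocation
    )

good = {"subject_email": "pat@phr.example.org", "issuer": "Example-HISP-CA",
        "serial": "1a2b3c",
        "not_before": datetime.now(timezone.utc) - timedelta(days=30),
        "not_after": datetime.now(timezone.utc) + timedelta(days=335)}
attacker = dict(good, issuer="Unknown-CA")  # substituted certificate
print(certificate_is_acceptable(good, "pat@phr.example.org"))      # True
print(certificate_is_acceptable(attacker, "pat@phr.example.org"))  # False
```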
There are secure DNS and secure LDAP solutions. They are not recommended because there is already a very robust system for validating Certificates, including discovering the assigning CA and checking for revocation. This system is preferable because it allows for far more flexible DNS and LDAP configurations; but mostly because it is one system that works equally well regardless of how you got the certificate (which means we could add new ways to get the certificate and it would still work, such as from a previous secure e-mail conversation). Note that if you use secure DNS or secure LDAP there is little overhead, and if it works for you then there is no harm in doing the verification multiple ways. SecureDNS might yet mature enough for us to rely on it, but it is simply not needed for the Direct need.
The sending infrastructure will ask the whole worlds DNS and the whole worlds LDAP for a certificate that claims to be ‘the’ certificate for the e-mail address that the patient provided. The GP is very confident that the e-mail address is right, and besides if it wrong then it is the Patient that deceived. But back to the problem. Let’s just say that 10 certificates claim to be ‘the one’, how does the infrastructure know which one to choose. It can’t be the FIRST certificate to come back, as that just means the attacker must just get their response back first, which isn’t hard. The sending infrastructure must thus not stop looking after receiving one response. It must wait a reasonable time for all potential candidates. During this time it can be ‘validating’ the certificates that have arrived.
Note that multiple responses is not an indication of an attack. There are good reasons for multiple certificate: The most likely is: As your certificate approaches expiration but before it does expire, you need to get a new certificate issued. It is important that the expiration times do overlap to allow for latency in the system, yes many months worth of latency is needed. Once you have your new certificate you need to send both your old and your new certificate when a DNS or LDAP request comes in, and thus both are valid. In this case, the sender should choose the ‘newest’, for best results. There is really no sending reason to select the oldest, however there are signature validation reasons why you might need the old one.
So the RISK is exposure of content because the certificate you use to encrypt the e-mail is not the legitimate certificate for the e-mail address you want to send to. Note that this risk was known, and is THE risk to be resolved in the original Direct Project threat assessment.
The solution
I will repeat that in this case we only want to make sure that the certificate is the legitimate certificate for the e-mail address we have. In this case we don’t care about proofing. Indeed even if I had started with a use-case where the certificate needed to include proof of high-assurance proofing, all these steps are needed. Thus the main problem is NOT the assurance, it is the legitimacy.
First we must do cryptographic checking. Doing this on a certificate is easy, it is available in many toolkits and operating systems: Is the cert signed, is the expiration time still valid, is the chain to a trusted CA, is the cert not revoked. The cryptographic part is well known, yes one must be using mature algorithms and key lengths. Please don’t allow RSA 1024 or MD5 hash. Look to FIPS 140-2 for guidance, and bump it up. In this case future-proofing is cheap and certificates tend to be around a long time. What is not solved yet is the list of CA certificates that I trust to issue legitimate certificates. Let me defer discussing where this magic-list comes from until later.
If I have cryptographically tested the certificate, then I know that the content of the certificate is valid. The next couple of steps will be looking at the attributes inside the certificate and making more decisions. The following checks are important for all use-cases. I want to make sure the certificate claims to be for the e-mail address that I want to send to. This is needed by all sending use-cases. There are some short-cuts going on here, I don’t like them but we just need to deal with these short-cuts in the future.
Further, I need the root CA to be claiming that it issues certificates for the domain of e-mail addresses. This is likely just to the domain-name part of the e-mail address. I will show later in the magic-list why this is important. A certificate issued in violation of the CA policy needs to be reported as a potential indicator of a compromised CA. The checking of the e-mail address and the CA domain issuing is important as it stops false certificates.
I also need to make sure that the certificate is one issued for the purposes of S/MIME e-mail encryption. This is another reason why I might get multiple legitimate certificates. Some of the certificates might be constrained for just digital-signature.
Identity Proofing:
It is only here that one cares if the identity in the certificate claims to be of a specific Identity Proofing Assurance level. And the use of this knowledge is not security-layer use, it is application logic. If the sender has done in-person proofing; then any identity assurance level is fine. It is only if the use-case has not done an in-person proofing that the infrastructure should utilize the Certificate claim of Identity Proofing level. Yup, this all I need say.
Magic-list
Now comes the hard part, how do you get the magic list of trusted CA certificates? You start from some magic list of trustable CA certificates and make some local decisions. You might outsource this local decision making to your Full-Service-HISP, but it better be transparent between you and your HISP.
Identifying the list of trustable CA Certificates is a hard problem, and done wrong can be really wrong. See the mess that has happened in the Web-Browser world where they took a specific short-cut that eventually caught up to them. A short-cut that seemed right at the time, and I would agree that it was the right short-cut AT THE TIME. The problem is that once you have taken a short-cut, it becomes the de facto pathway and no one ever challenges that short-cut. This one should have been challenged and replaced in the past 5 years. So, if we take a short-cut; please put an expiration on that short-cut.
Identifying the list of trustable CA Certificates is the space that DirectTrust is trying to fill. How does someone choose the list of CA certificates that they are going to trust? I will repeat this is a hard problem. Often with hard problems a group will end up with blinders on. I think this group is so worried about “Identity Proofing” that they can’t see that this is not the most important thing. It is important to know, but not important to constrain. Meaning for any certificate, the user of that certificate needs to be able to determine the identity-assurance-level.
So there needs to be a managed list of ‘trustable CA certificates’ – Not ‘Trusted’, but ‘Trustable’. The actual trust decision is not a central authority decision. The trust decision is really the sender and receiver of the message. All the infrastructure need only support that trust. So the qualities I can think of for this “Trustable CA Certificate” list:
a) Whatever CA Certificates are listed must be clearly identified as to WHY were they selected. The list of reasons why one might trust, is a growing list: Federal-PKI-Bridge, etc.
b) What certificate policy is used by that CA. This includes the Assurance level the CA issues at.
c) What e-mail address space are issued under this. E-mail addresses are made up of two parts separated by the “@” character. The first part is the unique identifier with in an assigning domain, the second part is the unique identifier of the domain. A CA really needs to be aligned with e-mail assigning domain. The reason why this is important is that the CA certificate is what is listed in the magic-list, and therefore it needs to be transparent about what should be considered legitimate identities issued by it. This is simply an indicator that this CA is the assigning-authority for identities issued in that domain.
d) When is this recommendation of this CA certificate going to expire. I would recommend it be short. Given that there is really not a good solution to revocation-checking for Root CA Certificates – although it can be done.
e) ….I am sure there are more and I expect those working on this are making good progress…
Given this list, does beg a question of how is this magic list of trustable CA certificates to be distributed? I can only point at the browser market, and hope they come up with a solution. They need automated distribution more than we do. We have a very manageable number of potential trustable CA certificates today. I recommend we wait on the scalability problem for a while. Yes this is the same short-cut the browser market chose. Yes a short-cut that needs an expiration.
More use-cases
This article is already long, but these other use-cases are important too:
I have already covered
1) I need to send a secure message to an e-mail address where I have already in-person proofed the identity.
2) I need to send a secure message to an e-mail address and I need there to be technical proof that the identity has been in-person proofed by someone trustable.
3) I have received a secure message, is it from someone with an identity that has a high assurance identity?
4) I have received a secure message, is it from someone I have already in-person proofed?
There are also some workflow use-cases that are prior to sending or post receiving. Like:
1) I need to find an address for a specific name. Display the identities found so that I can pick the right one. In this case the identities really need to be in-person proofed by someone else.
2) I have an e-mail address, what is the available information on that identity (aka the certificate content, but could also be the LDAP content).
The miss-use-cases are just as important to PREVENT
1) An attacker wants to have the content of your message
2) An attacker wants you to receive and accept their message
Conclusion
In all these cases the knowledge of any in-person proofing by the CA is important, but it is an attribute carried in the certificate. This attribute is used in workflows and/or as part of the ‘authorization’ decision. This is the critical step, as authorization decision is moving from “Trustable” to “Trusted”. Trusted is a decision of Authorization. Trusted is not a decision of Identification or even Authentication.
I do understand that some Healthcare Providers want to outsource this to their Full-Service-HISP. There are a vast number of them that really should do this. Trust is hard, and sometimes it is the right thing to outsource hard things. I outsource non-routine healthcare assessments to my Healthcare Provider, because that is hard. But I expect them to provide me choices and I ultimately make the decision based on their professional assessment. Sometimes there is only one choice and it gets done with little discussion (you need a lab test), sometimes the choices are vast.
I think the challenge is not in Identity-Proofing; it is in supporting reasonable decisions on a trustable list of CA certificates. NOT ‘the trusted list’, a ‘trustable list’
updated: to fix a sentence regarding secureDNS and secure LDAP.
The Deep Dive
This brings up the question of what the important use-cases are, and exactly what RISK the certificate system in Direct needs to solve.
This is easiest seen if I start with the controversial use-case, the one most likely to be used by a General Practitioner (GP) with their Patients. In this case the GP knows who their patients are, at least well enough to help them get healthy. Thus the GP really has done a Proofing step. In this case the Patient has a conversation with their GP and tells them their PHR e-mail address (e.g. HealthVault account e-mail address). The GP puts this e-mail address into his Direct solution for sending secure e-mail and documents. Note that I will show that the case where the sender didn't do in-person-proofing is a super-set with a few additional requirements, and no fewer requirements.
The Risk
In this case, when the GP sends something he needs the infrastructure to make sure that the certificate that is discovered is truly the certificate for the e-mail address given. That is, a malicious individual, the attacker, wants the content of the Direct message. In this case the attacker needs to get the sender to send the secured e-mail with the encryption envelope targeting a certificate that the attacker holds the private key for. The Direct Project uses really good security, both signed and encrypted e-mail using really good crypto algorithms. Thus the attacker must become what is seen as the intended recipient. Let me explain:
In non-secure e-mail (normal e-mail) this is rather easy: you just attack the DNS system and return your mail-server address faster as the location for the e-mail to be sent (the DNS MX-Record lookup, or DNS server address lookup). If you do this with an S/MIME protected message you will get nothing but the encrypted message. If poor encryption is used, you can beat on that e-mail message until you crack it; but if we choose good algorithms then this effort takes too long to be useful, and Direct specified strong algorithms. So we have this risk handled, through picking an end-to-end solution like S/MIME and good algorithms. And we leave the transport open for best flexibility, flexibility that is really useful for redundancy and robustness. As strange as it might seem, we simply live with this risk because it has a very-low impact. See How to apply Risk Assessment to get your Security and Privacy requirements.
The attacker wants access to the content of the secure e-mail, so they need to attack DNS again, but this time get it to return quickly with a certificate that the attacker wants used. If the sender system doesn't check this certificate, but just sends to it, then the attacker is back at non-secure e-mail and has full use of the content sent. The Direct Project requires checking of the certificate, which I will outline below, so compliance to the specification is important. Note that the certificate checking is mostly the same checking that all systems do for certificates.
There are secureDNS and secure LDAP solutions. They are not recommended because there is a very robust system to validate Certificates, including discovering the assigning CA and checking for revocation. This system is preferable because it allows for far more flexible DNS and LDAP configurations; but mostly because it is one system that works equally well regardless of how you got the certificate (which means we could add new ways to get the certificate and it would still work, such as from a previous secure e-mail conversation). Note that if you use secureDNS or secureLDAP there is little overhead, and if it works for you then there is no harm in doing the verification multiple ways. SecureDNS might yet mature enough for us to rely on it, but it is simply not needed for the Direct need.
The sending infrastructure will ask the whole world's DNS and the whole world's LDAP for a certificate that claims to be 'the' certificate for the e-mail address that the patient provided. The GP is very confident that the e-mail address is right, and besides, if it is wrong then it is the Patient that deceived them. But back to the problem. Let's just say that 10 certificates claim to be 'the one'; how does the infrastructure know which one to choose? It can't be the FIRST certificate to come back, as that just means the attacker must get their response back first, which isn't hard. The sending infrastructure must thus not stop looking after receiving one response. It must wait a reasonable time for all potential candidates. During this time it can be 'validating' the certificates that have arrived.
Note that multiple responses are not an indication of an attack. There are good reasons for multiple certificates. The most likely is: as your certificate approaches expiration, but before it does expire, you need to get a new certificate issued. It is important that the validity periods overlap to allow for latency in the system; yes, many months' worth of latency is needed. Once you have your new certificate you need to send both your old and your new certificate when a DNS or LDAP request comes in, and thus both are valid. In this case, the sender should choose the 'newest' for best results. There is really no sending reason to select the oldest, although there are signature-validation reasons why you might need the old one.
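The selection logic above (wait a reasonable time for all responses, validate each, and prefer the newest valid certificate) can be sketched as follows. This is a minimal illustration; the CandidateCert type and its field names are my own invention, not anything from the Direct specification:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CandidateCert:
    """Hypothetical stand-in for a certificate discovered via DNS or LDAP."""
    not_before: datetime
    not_after: datetime
    passes_validation: bool  # result of the cryptographic checks

def select_certificate(candidates, now):
    """Ignore arrival order entirely; keep only candidates that validate
    and are within their validity period, then prefer the newest issuance."""
    usable = [c for c in candidates
              if c.passes_validation and c.not_before <= now <= c.not_after]
    if not usable:
        return None
    return max(usable, key=lambda c: c.not_before)
```

Note that the attacker's fast-but-invalid response is simply filtered out: being first in the candidate list buys it nothing.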
So the RISK is exposure of content because the certificate you use to encrypt the e-mail is not the legitimate certificate for the e-mail address you want to send to. Note that this risk was known, and is THE risk to be resolved in the original Direct Project threat assessment.
The solution
I will repeat that in this case we only want to make sure that the certificate is the legitimate certificate for the e-mail address we have. In this case we don’t care about proofing. Indeed even if I had started with a use-case where the certificate needed to include proof of high-assurance proofing, all these steps are needed. Thus the main problem is NOT the assurance, it is the legitimacy.
First we must do cryptographic checking. Doing this on a certificate is easy; it is available in many toolkits and operating systems: is the cert signed, is the validity period still current, does the chain lead to a trusted CA, is the cert not revoked. The cryptographic part is well known; yes, one must be using mature algorithms and key lengths. Please don't allow RSA 1024 or the MD5 hash. Look to FIPS 140-2 for guidance, and bump it up. In this case future-proofing is cheap, and certificates tend to be around a long time. What is not solved yet is the list of CA certificates that I trust to issue legitimate certificates. Let me defer discussing where this magic-list comes from until later.
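The four checks can be modeled roughly like this. The Cert type here is a simplified placeholder for illustration only; a real implementation would hand all of this to a PKI library rather than re-implement it:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Cert:
    subject: str
    issuer: str            # name of the CA that signed this cert
    signature_valid: bool  # did the issuer's public key verify the signature?
    not_before: datetime
    not_after: datetime
    serial: int

def crypto_checks(cert, trusted_cas, revoked_serials, now):
    """The four cryptographic checks: signature, validity period,
    chain to a trusted CA, and revocation status."""
    return (cert.signature_valid
            and cert.not_before <= now <= cert.not_after
            and cert.issuer in trusted_cas           # the 'magic-list'
            and cert.serial not in revoked_serials)  # e.g. from a CRL
```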
If I have cryptographically tested the certificate, then I know that the content of the certificate is valid. The next couple of steps will be looking at the attributes inside the certificate and making more decisions. The following checks are important for all use-cases. I want to make sure the certificate claims to be for the e-mail address that I want to send to. This is needed by all sending use-cases. There are some short-cuts going on here, I don’t like them but we just need to deal with these short-cuts in the future.
Further, I need the root CA to be claiming that it issues certificates for the domain of e-mail addresses. This is likely just to the domain-name part of the e-mail address. I will show later in the magic-list why this is important. A certificate issued in violation of the CA policy needs to be reported as a potential indicator of a compromised CA. The checking of the e-mail address and the CA domain issuing is important as it stops false certificates.
I also need to make sure that the certificate is one issued for the purposes of S/MIME e-mail encryption. This is another reason why I might get multiple legitimate certificates. Some of the certificates might be constrained for just digital-signature.
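Putting these attribute checks together, a sketch (again with invented names; a real system would pull the e-mail address from the certificate's subjectAltName and the usage from its keyUsage extension):

```python
def attribute_checks(cert_email, cert_key_usages, ca_issuing_domains, target_email):
    """Check that the certificate (1) is for the address we are sending to,
    (2) was issued by a CA that claims that address's domain, and
    (3) is usable for S/MIME encryption, not just digital signature."""
    domain = target_email.split("@", 1)[1]
    return (cert_email.lower() == target_email.lower()
            and domain.lower() in {d.lower() for d in ca_issuing_domains}
            and "keyEncipherment" in cert_key_usages)
```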
Identity Proofing:
It is only here that one cares whether the identity in the certificate claims a specific Identity Proofing Assurance level. And the use of this knowledge is not a security-layer use; it is application logic. If the sender has done in-person proofing, then any identity assurance level is fine. It is only if the use-case has not done in-person proofing that the infrastructure should utilize the Certificate claim of Identity Proofing level. Yup, this is all I need to say.
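That application logic fits in a few lines. The numeric assurance levels here are made up for illustration (real deployments would map to a defined assurance scheme):

```python
def certificate_acceptable(sender_proofed_in_person, cert_assurance_level,
                           required_level=3):
    """If the sender already did in-person proofing, any assurance level
    is fine; otherwise lean on the certificate's proofing claim."""
    if sender_proofed_in_person:
        return True
    return cert_assurance_level >= required_level
```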
Magic-list
Now comes the hard part: how do you get the magic list of trusted CA certificates? You start from some magic list of trustable CA certificates and make some local decisions. You might outsource this local decision-making to your Full-Service-HISP, but it had better be transparent between you and your HISP.
Identifying the list of trustable CA Certificates is a hard problem, and done wrong can be really wrong. See the mess that has happened in the Web-Browser world where they took a specific short-cut that eventually caught up to them. A short-cut that seemed right at the time, and I would agree that it was the right short-cut AT THE TIME. The problem is that once you have taken a short-cut, it becomes the de facto pathway and no one ever challenges that short-cut. This one should have been challenged and replaced in the past 5 years. So, if we take a short-cut; please put an expiration on that short-cut.
Identifying the list of trustable CA Certificates is the space that DirectTrust is trying to fill. How does someone choose the list of CA certificates that they are going to trust? I will repeat this is a hard problem. Often with hard problems a group will end up with blinders on. I think this group is so worried about “Identity Proofing” that they can’t see that this is not the most important thing. It is important to know, but not important to constrain. Meaning for any certificate, the user of that certificate needs to be able to determine the identity-assurance-level.
So there needs to be a managed list of 'trustable CA certificates' – not 'Trusted', but 'Trustable'. The actual trust decision is not a central-authority decision. The trust decision really belongs to the sender and receiver of the message. All the infrastructure need only support that trust. So the qualities I can think of for this "Trustable CA Certificate" list:
a) Whatever CA Certificates are listed must be clearly identified as to WHY they were selected. The list of reasons why one might trust is a growing list: Federal-PKI-Bridge, etc.
b) What certificate policy is used by that CA. This includes the Assurance level the CA issues at.
c) What e-mail address space is issued under this CA. E-mail addresses are made up of two parts separated by the "@" character. The first part is the unique identifier within an assigning domain; the second part is the unique identifier of the domain. A CA really needs to be aligned with an e-mail assigning domain. The reason why this is important is that the CA certificate is what is listed in the magic-list, and therefore it needs to be transparent about what should be considered legitimate identities issued by it. This is simply an indicator that this CA is the assigning-authority for identities issued in that domain.
d) When does this recommendation of the CA certificate expire? I would recommend it be short, given that there is really not a good solution to revocation-checking for Root CA Certificates – although it can be done.
e) ….I am sure there are more and I expect those working on this are making good progress…
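A minimal sketch of what one entry in such a list might carry, and how a sender might consult it. The field names are my own; DirectTrust's actual bundle format will differ:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrustableCA:
    ca_name: str
    why_selected: str        # (a) e.g. "cross-certified with Federal PKI Bridge"
    certificate_policy: str  # (b) policy / assurance level the CA issues at
    address_domains: set     # (c) e-mail domains this CA is authoritative for
    listing_expires: date    # (d) keep this short

def cas_for_address(bundle, email, today):
    """Return the listed CAs that are authoritative for this address's
    domain and whose listing recommendation has not yet expired."""
    domain = email.split("@", 1)[1].lower()
    return [ca for ca in bundle
            if domain in ca.address_domains and today <= ca.listing_expires]
```

Note that the expiration check makes quality (d) operational: a stale listing silently drops out of the trustable set instead of lingering forever.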
Given this list, it does beg the question of how this magic list of trustable CA certificates is to be distributed. I can only point at the browser market and hope they come up with a solution; they need automated distribution more than we do. We have a very manageable number of potential trustable CA certificates today. I recommend we wait on the scalability problem for a while. Yes, this is the same short-cut the browser market chose. Yes, a short-cut that needs an expiration.
More use-cases
This article is already long, but these other use-cases are important too:
I have already covered:
1) I need to send a secure message to an e-mail address where I have already in-person proofed the identity.
2) I need to send a secure message to an e-mail address and I need there to be technical proof that the identity has been in-person proofed by someone trustable.
3) I have received a secure message, is it from someone with a high-assurance identity?
4) I have received a secure message, is it from someone I have already in-person proofed?
There are also some workflow use-cases that occur prior to sending or after receiving. Like:
1) I need to find an address for a specific name. Display the identities found so that I can pick the right one. In this case the identities really need to be in-person proofed by someone else.
2) I have an e-mail address, what is the available information on that identity (aka the certificate content, but could also be the LDAP content).
The mis-use-cases are just as important to PREVENT:
1) An attacker wants to have the content of your message
2) An attacker wants you to receive and accept their message
Conclusion
In all these cases the knowledge of any in-person proofing by the CA is important, but it is an attribute carried in the certificate. This attribute is used in workflows and/or as part of the 'authorization' decision. This is the critical step, as the authorization decision is the move from "Trustable" to "Trusted". Trusted is a decision of Authorization. Trusted is not a decision of Identification or even Authentication.
I do understand that some Healthcare Providers want to outsource this to their Full-Service-HISP. There are a vast number of them that really should do this. Trust is hard, and sometimes it is the right thing to outsource hard things. I outsource non-routine healthcare assessments to my Healthcare Provider, because that is hard. But I expect them to provide me choices and I ultimately make the decision based on their professional assessment. Sometimes there is only one choice and it gets done with little discussion (you need a lab test), sometimes the choices are vast.
I think the challenge is not in Identity-Proofing; it is in supporting reasonable decisions on a trustable list of CA certificates. NOT 'the trusted list', but a 'trustable list'.
updated: to fix a sentence regarding secureDNS and secure LDAP.