Friday, August 31, 2012

Identity - Proofing

There is much discussion lately about "Identity Proofing". Much of the discussion is around the Direct Project and the identities to be used with Direct. There is a massive effort to create highly assured identities for Providers, while at the same time there is a real concern that doing this high-assurance identity proofing for patients is not necessary, and for some not desired. Arien has a fantastically readable blog article on this topic: Identity Assurance for Patients using Direct. I like the model he describes, and have spoken about the same model on my blog multiple times, just not as well written as Arien does.

Arien proposes that for patients the identity proofing is done by the Healthcare Provider on a one-by-one relationship basis; that is, the Healthcare Provider does their own proofing and binds whatever identity the patient hands them to the identity they already have for that patient. The result is an effective in-person proofing, just one where the identity assurance is not transferable to others that use the same identity. That is not his goal, and in my view shouldn't be anyone's goal with these use-cases. I am totally on board with this solution. This is exactly what happens with Patient IDs today, so why should Direct-based e-mail addresses be any different? This is also similar to how secure e-mail was done before Direct chose that technology: Trusting e-Mail.

The identity that the patient provides doesn't need to be otherwise technically bound to the patient as a human; it is thus an anonymous identity. The anonymous identity is anonymous to those with whom the patient doesn't need to be fully identified. For example, their PHR (e.g. HealthVault) really doesn't need to know the human attached to the account; it just needs to make sure that it is always the same human interacting with it. This is the same thing that Google+, Facebook, and Twitter do. Even with Google+ efforts to force 'real identities', they really don't know who the individual is. There are few cases where these types of services really need to know who the user is. They just want to make sure that you are always the same individual. In fact, many people I know have multiple Google+, Facebook, and Twitter accounts: one set for work, one for home.

But this does not mean that a process like the one Arien points out can't be used on a case-by-case basis to bind the real identity that the healthcare provider knows to the anonymous identity that the patient is providing. An anonymous identity is simply one that doesn't itself describe who the individual is; it is likely a string of random letters and digits with no demographics backing it. This can be done as a base identity, such as the DonaldDuck example, which is a form of Voluntary Patient Identity. Or it can be done with the identity technology itself; for example, SAML can issue pseudonym identities where only the IDP knows the linkage. The important part is that the identity is trusted one-by-one because of a personal relationship, not because of the technology or chained trust. This is a form of in-person proofing, just a form that is not transferable.
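To make the pseudonym idea concrete, here is a minimal sketch of a pairwise pseudonym along the lines of the SAML persistent-identifier approach mentioned above. The key, identifiers, and function name are illustrative assumptions, not any particular IDP's implementation; the point is only that the identity provider alone can link the values back to the person.

```python
# Sketch: a pairwise pseudonym that only the identity provider can link
# back to the real person. Key and names are illustrative only.
import hmac
import hashlib

IDP_SECRET = b"known-only-to-the-identity-provider"  # assumption

def pairwise_pseudonym(internal_user_id: str, relying_party: str) -> str:
    """Derive a stable, per-relying-party identifier with no demographics."""
    msg = f"{internal_user_id}|{relying_party}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()[:32]

# The same person appears as two unrelated identifiers at two providers:
print(pairwise_pseudonym("donald.duck", "clinic-a.example.org"))
print(pairwise_pseudonym("donald.duck", "clinic-b.example.org"))
```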

This can go to extremes too, where the patient uses different pseudonyms with different healthcare providers, thus preventing the two healthcare providers from determining that they are treating the same patient. A very reasonable and smart thing to do if that is really what you want. However, it would likely get discovered, as healthcare providers do tend to identity-proof you; it would work best for procedures where you pay cash. I would be concerned about how this system could be abused in cases like drug-seeking behavior, where the patient hides his prescriptions while leveraging the same healthcare problem. Yes, as a security expert I must think about the abuse vectors.

I will note that ultimately a PHR service does receive your identity by way of the medical data that it receives. HealthVault knows well how to bust apart a CDA, DICOM, CCR, etc. Thus although you might not tell them exactly who you are, they know it in the end.

This system should be just as usable for Providers as it is for Patients. There is even evidence that Providers don't want to communicate with identities that they don't personally know. Early HIE efforts in northern California had trouble getting Providers to share until they realized this, so they organized social events where a handshake could happen, faces could be seen, and ultimately business cards could be exchanged. Sharing took off (I so wish I had a reference to this, but I remember it from a presentation). No technical system can do what human relationships do. Identity providers (PKI, OpenID/OAuth, and SAML) are all good solutions when there is a trusted third party; all equally good technology.

This is all enabled by delaying the proofing operation to the human one-on-one relationship. 

The problem I have with anonymous identities is when they are used in a way that is not backed by some in-person binding: when someone trusts an identity without this in-person relationship, or trusts it because someone else said it was good enough. Some would claim that this is the 'scalability' problem with this model; I disagree. It is only a problem when people start trusting the technology rather than the human relationship. But yes, that does have problems with scale. And it is more likely that provider-to-provider exchange will be non-human-relationship based, whereas provider-to-patient is more likely to have a hands-on relationship.

When the in-person proofing stops happening, usually because someone presumed it already was done, a malicious individual can insert themselves into that break in the system. That malicious individual can claim to be a patient, and cause PHI to be disclosed. There are well-known cryptographic methods to support trust; building a hack just for healthcare is dangerous.

My point is that we must always think through the misuse-cases with more vigor, as we know that the malicious use-cases are large; attackers have time to think, they have motivation, etc.

Updated:
Risk: The identity you intend to send to gets redirected somewhere else. With Direct this is easily done by attacking the DNS pathway, either by returning DNS results faster than the legitimate server or by blocking the real results. The attacker returns a falsified MX record lookup, so that my mail service connects to the wrong target mail service; and also returns falsified CERT records, carrying falsified certificates (which requires making a cert that chains to something my system trusts). The most important vector is the cert, hence the desire to have a small set of CA roots that are highly reliable, meaning they are trustworthy enough to only create legitimate certs. It also requires that we use signing algorithms that don't have technical vulnerabilities (such as MD5). This is the reason most are heading toward a CA-centric world. My preferred alternative is to not do certificate distribution using DNS, but rather through a one-on-one e-mail conversation as I describe in Trusting e-Mail.
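To illustrate the attack surface, here is a small sketch of the DNS lookups a sending system performs. It assumes the dnspython library, and the domain-level CERT lookup shown is a simplification of the full Direct certificate-discovery rules.

```python
# Sketch of the DNS pathway described above: MX tells my mail service
# where to connect, CERT (RFC 4398) carries the recipient's certificate.
# Falsifying either answer is exactly the redirection risk discussed.
import dns.resolver  # dnspython

def discover_direct_endpoint(direct_address: str):
    domain = direct_address.split("@", 1)[1]

    # A spoofed MX answer redirects the message to the wrong mail service.
    mx_hosts = [r.exchange.to_text() for r in dns.resolver.resolve(domain, "MX")]

    # A falsified CERT answer only helps the attacker if the fake cert
    # chains to a CA root my system already trusts.
    try:
        certs = list(dns.resolver.resolve(domain, "CERT"))
    except dns.resolver.NoAnswer:
        certs = []  # fall back to an out-of-band certificate exchange

    return mx_hosts, certs
```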

References:

Wednesday, August 29, 2012

Minimal Metadata

This is related to Meaningful Use Stage 2, at least that is my theory in Karen's Cross or just Minimal Metadata. So, what is ‘minimal metadata’? I have discussed Healthcare Metadata. That discussion includes background on why metadata exists in the first place, and breaks the IHE metadata down by purpose. That maximum viewpoint of metadata for healthcare sharing is far more focused on supporting a publication-and-discovery model of a longitudinal Health Information Exchange. Note that even with XDS much of this metadata is not mandatory, but it is defined so that those who need to communicate the concept do it in an interoperable way.

The scope of metadata to support a short-term PUSH transaction is a bit simpler. Further, even in a short-term PUSH transaction there can be a minimalistic viewpoint; actually, two flavors of minimum. When IHE re-used the XDS transactions and metadata model for PUSH-like transactions in XDM and XDR, there was not a critical review of the minimal metadata. For the most part we published the very same requirements and figured Public Comment or Trial Implementation would push back if it was unreasonable. There was very little push-back. But in hindsight IHE needs to recognize that the audience at the time was mostly those with an XDS viewpoint, so there was not a critical view.

Along came the Direct Project, with a totally new audience; as Wes is known for saying, “Change the consensus group, change the consensus”. So the Direct Project looked at the metadata requirements for IHE XDR and downgraded some from mandatory to required-if-known. To some this is a huge change, but I really don’t see it as a big change. Most who would have seen a mandatory field and not had an answer would have either sent it blank or put static filler there. But let’s look at the details.

What is Direct Minimal Metadata?
You can get the full details from the XDR and XDM for Direct Messaging specification, Section 6.0. Here is the short version of the 13 metadata items downgraded from Mandatory (R) to Required-If-Known (R2). The main reason is that this information is really not all that useful for a PUSH; it is more useful for a Query. The second reason is that this information is not typically found in an ambulatory or very low-end clinic, or at least not in a highly structured and coded way.

Direct Downgraded from Mandatory (R) to Required-If-Known (R2)
  • Document Entry Metadata 
    • classCode 
    • confidentialityCode 
    • creationTime 
    • formatCode 
    • healthcareFacilityTypeCode 
    • languageCode 
    • patientId 
    • practiceSettingCode 
    • sourcePatientId 
    • typeCode 
  • Submission Set Metadata 
    • author 
    • contentTypeCode 
    • patientId

IHE Liked it!!!
IHE looked at this evaluation and ACCEPTED ALL and downgraded even MORE
  • Document Entry Metadata 
    • author 
  • Folder 
    • codeList 
    • patientId 
Actually IHE not only accepted these, but for XDM made this the base rule. For XDR there is a special actor, “Metadata-Limited Document Source”, that is allowed to use these relaxed rules. This was done to allow the recipient to know which validation rules it can enforce, and to allow deployment specifications to be specific.

So why bother with metadata? The reason is that if the sender DOES know this information, it is useful to the recipient. Therefore if the sender has it, it should be communicated (R2); and if it is useful to communicate, it should be communicated in an interoperable way. Given that these are just XML attributes, being empty or full doesn’t much matter. Remember the transports are content agnostic, capable of carrying a CDA just as well as a PDF. This enables historic information, current information, and future formats we have not yet thought of. A content-agnostic transport is a powerful thing, as is a reasonable set of metadata.
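As a rough illustration of the required-if-known idea, here is a sketch that populates only the downgraded DocumentEntry attributes the sender actually knows. The attribute names come from the list above, but the helper and its policy are assumptions, not the conformance rules of the specification.

```python
# Sketch: fill R2 (required-if-known) attributes only when real values
# exist, rather than padding mandatory slots with static filler.
DIRECT_R2_DOCUMENT_ENTRY = {
    "classCode", "confidentialityCode", "creationTime", "formatCode",
    "healthcareFacilityTypeCode", "languageCode", "patientId",
    "practiceSettingCode", "sourcePatientId", "typeCode",
}

def minimal_document_entry(known_values: dict) -> dict:
    """Keep only the R2 attributes the sender actually knows."""
    return {name: value for name, value in known_values.items()
            if name in DIRECT_R2_DOCUMENT_ENTRY and value}

# A low-end ambulatory sender that only knows a couple of values:
print(minimal_document_entry({
    "creationTime": "20120829",
    "languageCode": "en-US",
    "patientId": None,   # unknown, so omitted rather than faked
}))
```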

Additional Direct Special Considerations
In order to code Direct Address elements in the metadata, extensions were defined for the author and intendedRecipient attributes: tricks (profiles) on how to make this happen in a reliable way, again to assure the best interoperability.
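As a purely illustrative sketch (the authoritative encoding rules live in Section 6 of the specification), an email-style Direct address is commonly carried in an HL7 XTN telecom value, roughly like this; treat the exact component layout as an assumption.

```python
# Sketch: a Direct address expressed as an HL7 XTN telecom value, the
# kind of shape used for the author / intendedRecipient extensions.
def xtn_for_direct_address(direct_address: str) -> str:
    # components 1-2 empty, component 3 = "Internet", component 4 = address
    return f"^^Internet^{direct_address}"

print(xtn_for_direct_address("drbob@direct.example.org"))
# -> ^^Internet^drbob@direct.example.org
```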

Conclusion
If ONC only wanted to specify that minimal metadata should be considered acceptable, they could have done that using the base XDM profile without modification, which they already get with the "Direct Project Applicability Statement", otherwise known as Transport (a). Meaning they already get minimal metadata with the base Direct specification, since the base XDM specification is already built into Direct.

And for XDR they could have called upon the “Metadata-Limited Document Source” actor. 

Even if they don’t like calling upon IHE specifications, they could have just called on Section 6 of the XDR and XDM for Direct Messaging specification.
I hope that this is what Transport (b) means.

Monday, August 27, 2012

Karen's Cross or just Minimal Metadata

I made the Karen's Cross observation about the NPRM, but I seem to have failed to make it clear. The transport identified in Meaningful Use 2 as (b) is NOT a transport; it is a functional specification for a service that converts Direct to/from XDR. It is a service specification, NOT a transport specification. Both sides of this service specification are fully specified: on one side is Direct, on the other side is XDR. What makes this more difficult to understand is the (a)+(b) or (b)+(c) math...

The alternate view is that ONC just means the minimal metadata specification. I am hopeful this is the right read.

§ 170.202 Transport Standards. (b) ONC XDR and XDM for Direct Messaging Specification (incorporated by reference in § 170.299).
The specification is properly identified; you can find it here, it is actually this, and it comes from XDR and XDM for Direct Messaging.

If they meant only to require the metadata portions as mandatory, they should have said that. Actually, these metadata requirements have been incorporated into IHE XDR as a specific option, so they could have identified that option. But they did not; they said the WHOLE SPECIFICATION.
Karen's Cross

The (b) transport is pointing toward a specification that was written as part of “The Direct Project”. This specification shows how interoperability can be achieved when one system is using purely the secure e-mail of “The Direct Project” and another system is using IHE XDR. Both specifications are PUSH, and both support the same high-level goals; they are simply different transport/session-level encodings. This specification shows the relationship between the e-mail transport and the IHE XDR + SOAP transport. For example, it explains how an XDR submission set with multiple documents can be converted into an XDM submission set with multiple documents, the result zipped according to the XDM option for “secure e-mail”, and this ZIP file placed into a secure e-mail message following “The Direct Project”.
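For a feel of the XDR-to-Direct direction, here is a minimal sketch of packaging documents into an XDM-style ZIP and attaching it to an RFC 5322 message. The directory layout and file names are my reading of the XDM media interchange conventions and should be treated as assumptions.

```python
# Sketch: bundle documents and metadata into an XDM-style ZIP, then attach
# it to an ordinary e-mail message. The Direct security layer (S/MIME
# signing and encryption) would be applied before the message is sent.
import zipfile
from email.message import EmailMessage

def build_xdm_zip(path: str, metadata_xml: str, documents: dict) -> None:
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("README.TXT", "XDM package produced by a bridge (sketch)")
        z.writestr("IHE_XDM/SUBSET01/METADATA.XML", metadata_xml)
        for name, content in documents.items():
            z.writestr(f"IHE_XDM/SUBSET01/{name}", content)

def wrap_in_direct_message(zip_path: str, sender: str, recipient: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, "XDM package"
    with open(zip_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="zip", filename="ihe_xdm.zip")
    return msg
```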

The following table shows the cases of conversion that SHALL be performed.

| Senders \ Receivers | RFC 5322 + MIME | RFC 5322 + XDM | SOAP + XDR |
| --- | --- | --- | --- |
| RFC 5322 + MIME | No conversion | No conversion (receiver is expected to be able to use the non-XDM format) | Transport conversion; metadata is created |
| RFC 5322 + XDM | No conversion (receiver is expected to be able to handle the XDM package) | No conversion | Transport conversion; metadata is simply transformed |
| SOAP + XDR | Transport conversion; metadata is simply transformed; delivered as XDM package | Transport conversion; metadata is simply transformed; delivered as XDM package | No conversion |
This is a proxy or bridging specification. It isn't a specification that EHR technology would implement. It is the bridging technology, a proxy service, that allows for mostly-seamless interaction regardless of whether both the sending and receiving systems support the very same transport. It is a specification that shows how two radically different transports can be made to work together by a proxy system. This proxy or bridging service would typically run at the edge of one type of network as a transparent gateway to the other network, so there would be only a few of these proxy/bridge systems.
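To show how mechanical the bridge's decision is, here is a toy sketch keyed off the conversion table above; the transport labels and the function are invented for illustration and are not part of the specification.

```python
# Toy sketch: what the proxy/bridge does for each sender/receiver pairing,
# mirroring the conversion table above.
NO_CONVERSION = "no conversion"

ACTIONS = {
    ("RFC5322+MIME", "RFC5322+XDM"): NO_CONVERSION + " (receiver handles non-XDM format)",
    ("RFC5322+MIME", "SOAP+XDR"): "transport conversion; metadata is created",
    ("RFC5322+XDM", "RFC5322+MIME"): NO_CONVERSION + " (receiver handles XDM package)",
    ("RFC5322+XDM", "SOAP+XDR"): "transport conversion; metadata simply transformed",
    ("SOAP+XDR", "RFC5322+MIME"): "transport conversion; metadata transformed; delivered as XDM package",
    ("SOAP+XDR", "RFC5322+XDM"): "transport conversion; metadata transformed; delivered as XDM package",
}

def bridge_action(sender: str, receiver: str) -> str:
    return NO_CONVERSION if sender == receiver else ACTIONS[(sender, receiver)]

print(bridge_action("RFC5322+MIME", "SOAP+XDR"))
```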

Why did we need Karen's Cross?
Back when the Direct Project was working hard on defining the specification and the other parts around it, there was a recognition that those that can talk Direct and those that can talk XDR could talk to each other if we worked out exactly how to convert from one to the other.

This was graphically shown on the white board by Karen, and thus became known as Karen's Cross. The diagram did get cleaned up and is shown above, pulled from the Direct Project wiki article on the Intersection with Exchange. The top of the Cross shows two systems communicating using the Direct specification; the bottom shows two systems communicating using the NwHIN-Exchange push transport (which is XDR). If one stays totally on top, or totally on the bottom, there is no problem. But if you want to cross over then you need the RED arrows. It is these RED arrows that make up the “XDR and XDM for Direct Messaging” specification.



What Karen's Cross shows is that the end systems don't need to know what the technology of the other system is, and that the conversion is done transparently through automation. In Deployment Model terms, here is the diagram for the RED arrow from the top left to the bottom right. It shows how this system converts a Direct e-mail message into an XDR message delivered over the NwHIN-Exchange.



The RED arrow from the bottom left to the top right is shown here.


There is far more description at the Direct wiki Deployment Models page; I encourage even a quick look at it. This is simply further proof of the magic of the use of standards, the little blue box.


Conclusion
It is very possible that all ONC wanted to pull from this specification was the minimal metadata. That is a reasonable thing to pull from the specification. The minimal metadata recognized that some metadata IHE had originally identified as required simply isn't always going to be available. However, if this is what they wanted to do, they should have said so. IHE has adopted this minimal metadata directly into the XDR and XDM specifications --> Support for Metadata-Limited Document Sources. So there is no need for such specification-pointing gymnastics. I think I am going to assume that all they intended was the minimal metadata.

The whole set of transports could have been specified far more simply using IHE profiles. There is NO technical difference. It is so frustrating that all this specification complexity exists because there is a desire somewhere to keep IHE profiles out.

Thursday, August 23, 2012

Effective Standards Evaluation

Guest blog by Karen Witting, co-chair of the IHE ITI Planning Committee 

The NwHIN Power Team has been tasked with creating metrics through which standards are assessed for Maturity and Adoptability.  Their Final Recommendation was presented to the HIT Standards Committee on 8/15/2012 and was met with approval from members of the committee.

The criteria they recommend are very detailed, requesting high/medium/low ratings on many different aspects of a standard. The details of these criteria were also provided to the HIT Standards Committee.

The Power Team has done a very complete job, listing, for each metric, attributes that would qualify the standard to belong in each of the three ratings. Some attributes are very specific, laying out concrete metrics for things like coordinated implementations and age of the oldest known conforming implementation. Others seem concrete on the surface but have hidden challenges, like “number of organizations supporting authorship and/or review”. The difference between number of authors and number of reviewers is pretty significant, and it seems confusing to mix them together in one metric. But most are based on the perception of the reader, things like “no references”, “few references” and “numerous references”, or “few users”, “limited user” and “active user”.

Effective Evaluation vs just Evaluation
While I might disagree with the clarity and usefulness of some of the metrics, my larger concerns are about the validity and level of detail called for by the metrics. For example, under Maturity, the Stability metric lists number of releases and problem history. It suggests that standards with more releases are less stable than those with fewer releases. But in reality the number of releases is only tangentially related to stability; it is the significance of the change between releases that matters much more, and only those very familiar with the standard can assess the significance of the change. The same can be said about problem history. As such, these “metrics” are extremely subjective, especially in the hands of someone who is not intimately involved in the standard’s development. Simple lack of knowledge leads to use of anecdotal evidence, which results in as subjective a result as would be achieved without the detailed criteria.

I observed this challenge as I listened to the NwHIN Power Team assess InfoButton as a test of their criteria. Those doing the assessment were given only the InfoButton specification and Implementation Guide and asked to assess the standard on criteria that go well beyond anything a specification or implementation guide is designed to address. No effort was made to gather evidence regarding implementation and deployment of the standard. In fact, a paper assessing implementation of InfoButton (http://www.ncbi.nlm.nih.gov/pubmed/22226933) would have provided very useful data to consider when making the assessment. During the discussion I heard statements such as “I don’t know anybody who has implemented this” when, in fact, 17 implementing organizations are listed in the paper, spanning Health IT vendors, healthcare organizations, and knowledge publishers.

My understanding of the purpose of the NwHIN Power Team’s criteria is to improve the subjective process of assessing standards by creating metrics which enable the assessment to be done in a less subjective manner. While the criteria developed by the NwHIN Power Team suggest that much more detailed knowledge goes into making the assessment, I’m not convinced that the criteria alone will improve the subjective nature of assessing standards. Having a detailed set of criteria is fine, and probably helpful for those not familiar with standards. But equally important, if not more important, is the gathering of data on which the assessment can be based. Through my involvement in IHE I have seen this done many times, as for each new profile we do a similar type of standards assessment. We always assess the standards we select for maturity and adoptability. But we do this by gathering as much data as can reasonably be gathered through web searches, queries to supporting standards bodies, or requests sent through peer networks.

In the case of the assessment of InfoButton my concerns may be unfounded, as this was more a test case than a true assessment. But this same committee did do some real assessments on NwHIN standards, and I saw the same approach: very little time spent gathering data about the standard and a lot of time talking amongst people who often do not have any first-hand experience with it. This is not a good process for assessing standards.

Conclusion:
My wish is that any group which uses the NwHIN Power Team’s criteria ensure that, prior to doing any assessment, a thorough and complete data-gathering task is completed and all reasonable data regarding the standard, its implementation, and its deployment are available. All assessments are subjective; there is no avoiding that. But the more data used in making the assessment, the less subjective it will be. Having detailed criteria is only useful if the group also invests in gathering all relevant data.

IBM disclaimer: "This posting is Karen's personal opinion and doesn't necessarily represent IBM's positions, strategies or opinions."

Sunday, August 12, 2012

The Emperor has no clothes - De-Identification and User Provisioning

I am disturbed by discussions lately in both the De-Identification and User Provisioning space. Yes, there is a common thread between them. It is not a technical thread, but a social thread: identity is HARD. Proving identity is hard, and keeping identity secret is hard. Yes, this math works out. I think we have a case where people really, really want to believe that De-Identification and User Provisioning are easy, or could be made easy. Well, I am not willing to say that your clothes are pretty. Identity is a hard business, and has been since well before Information Technology or the Internet.


I will start with the biggest story, about “Mat”. So big it is on CNN and elsewhere. This is a very interesting story to dig deep into; actually it doesn’t even take that much digging to get deep, because Mat has done such a good job of explaining it himself. Please look at the stories for the details. The summary is that attackers who just wanted his Twitter account leveraged multiple social-engineering vulnerabilities in organizations that should know better. I am even willing to assert that these organizations do know better. They are forced to be this liberal with their policies because their customers don’t like being challenged. Meaning they could have stronger policies, procedures, and even technology. It isn’t a cost thing either; it is a case of convenience. And as long as the customer thinks that what they are providing is good enough, these organizations must believe that it is good enough.

Closer to healthcare is the work going on in the Direct Project, specifically the group trying to create identities for use with the Direct Project protocol (secure e-mail using S/MIME). The DirectTrust.org group is really doing good work, but they are constantly pushed to make it even easier. There comes a point when user provisioning becomes so easy that a dog can get an identity. I am not saying that DirectTrust is at this point; I actually think they are working hard to avoid that failure. I do, however, think that any effort to do user provisioning without in-person proofing is not going to work, especially for access to healthcare information. And this is just the user provisioning: as the Mat case shows, these identities are only as good as the reset system, or, for certificates, the revocation and renewal systems. These have not been the focus yet, but need to be. This stuff is hard, and anything less will be thwarted.

Moving over to De-Identification. Here is a topic that is trying to hide identity, and we see just how hard that is too. The latest news actually indicates that de-identification should be seen as stronger than it is given credit for. When done right, this is indeed true: when you put effort into your de-identification method you can have one that is truly well done. I have covered this in De-Identification is highly contextual, and I am involved in ISO standards on Pseudonymization and De-Identification, as well as an IHE handbook we are writing to guide profile writers. There are efforts to get the USA government (ONC) to define a new De-Identification specification, an effort to get government endorsement of shortcuts. But this gets to my point: people want it to be easy when it is actually hard. It is hard, and that is good; shortcuts will result only in failures.

There is a very interesting piece that ties these topics together without intending to. In Kim Cameron’s blog he talks about a really cool use of the social graph to do user provisioning. I assert that this is not what we want to do with this information, at least not for moderate or high security like we need in Healthcare. But his point is that there is so much social information on the internet that we must leverage it when doing user provisioning. Unfortunately, as the Mat case indicates, this can cut both ways: attackers can invent social graph and thus invent new identities or radically change an identity.

The root problem is that proving an individual is who they say they are is HARD, even if you are that individual. It is hard to prove that you are indeed who you say you are. Identity is not something that nature gives us; identity is something we humans have added. Identity is NOT NATURAL. This doesn’t mean that identity is a bad thing, but it does mean that we must constantly be testing identity assertions. When we are in social situations we are always observing the people we know, constantly testing that they seem to be the person we know. We are also observing those we have been introduced to, learning how to re-verify (authenticate) them perchance in the future. We don’t just use one introduction; we also ask our friends if they also know this person. Multiple assertions are often used, not necessarily strong assertions or fully trusted ones.

We need to take the approach that identity is hard, and deal with that fact. We should NOT try to oversimplify the user provisioning steps, or make password reset easy, or make de-identification simple. These are hard things and they NEED to be hard. I have hope for the NSTIC Identity Ecosystem Steering Group. It is good to see a few healthcare representatives on this group. I don’t know them, but would welcome a dialog.

Thursday, August 2, 2012

HL7 WGM - Introduction to Security and Privacy

The HL7 Security Workgroup is hosting a Free Educational Session at the Baltimore HL7 Working Group Meeting - September 12th, afternoon. See Page 13 in the HL7 Workgroup Meeting Brochure
This session will focus on how to apply security and privacy to the health IT standards. It will cover the basics of security and privacy using real-world examples. The session will explain how each phase of design needs to consider risks to security and privacy to best design security and privacy in; and mechanisms for flowing risks down to the next phase of design. In addition, it will cover the security and privacy relevant standards that HL7 has to offer including: Role-Based-Access-Control Permissions, Security/Privacy ontology, ConfidentialityCode, CDA Consent Directive, Access Control Service, Audit Control Service, and others. These standards and services will be explained in the context of providing a secure and privacy protecting health IT environment.
As a FREE Educational Session this takes place in the Security Workgroup meeting room Q3-Q4 Wednesday. I invite all members of the security workgroup to attend, engage in discussion, and offer to lead topics. I am prepared to do this completely on my own, but really really enjoy sharing the spotlight.

What I have planned is for the first Quarter (Q3 Wednesday) to cover our already prepared Security Risk Assessment Cookbook Tutorial. The focus here is on fundamentals of security and privacy risk assessment as a means to determine realistic requirements that mitigate risks in a complete and appropriate way. This will be accelerated as it is originally intended to be twice this long. It might get compressed even more if we uncover and create more compelling second half work.

For the second half of the afternoon (Q4 Wednesday) I would like to cover the other security and privacy components that exist in HL7. Here is where I really hope to leverage the expertise of the other Security Workgroup members.
  • HL7 Value Sets using Code System Confidentiality (2.16.840.1.113883.5.25) -- This vocabulary is used in the confidentialityCode metadata attribute to identify the data object's sensitivity and confidentiality classification. This enables both segmentation of especially sensitive topics and also Role-Based Access Control that protects objects for both security and privacy (see the sketch after this list).
  • HL7 Version 3; Composite Privacy Consent Directive (CDA), DSTU Release 2 - This CDA document object captures the patient privacy preferences, authorizations, and consents. This document is used as evidence of a patient consent ceremony as well as triggers privacy policy engines to enforce the patient privacy.
  • Role-Based Access Control Permission Catalog (RBAC), Release 2 - This vocabulary enables communication of a user's permissions in an interoperable way. This vocabulary can be used at a multitude of points in the Privacy and Security system.
  • Privacy, Access and Security Services (PASS)
    • Access Control Service – This is a service being defined for support of access control decisions and enforcement
    • Healthcare Audit Services Release 1.0 -- This service specification is available and enables security audit log recording. There are also service endpoints to enable different security and privacy audit analysis use-cases, including the creation of an accounting of disclosure.
  • EHR Functional Model, Release 1 -- The EHR functional model includes a comprehensive set of security and privacy functions. This catalog includes detailed system level requirements that are actionable and testable. Profiles of this functional model are available for many functional systems including an EMR and PHR.
  • HL7 Version 3 Standard: Transport Specification, MLLP, R2 -- The HL7 transport specifications include transport security (e.g. TLS)
  • and probably some of the currently under development things...
Most important is that this is a discussion. We will cover whatever material the audience needs in the space of what HL7 has to offer in the realms of Privacy and Security.
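As a small, hedged illustration of the confidentialityCode item above (the element shape is the standard CDA header form; the specific code chosen is just an example), here is how the HL7 Confidentiality code system shows up in practice:

```python
# Sketch: build a CDA-header-style confidentialityCode element using the
# HL7 Confidentiality code system (2.16.840.1.113883.5.25).
import xml.etree.ElementTree as ET

CONFIDENTIALITY_CODE_SYSTEM = "2.16.840.1.113883.5.25"
COMMON_CODES = {"N", "R", "V"}  # normal, restricted, very restricted

def confidentiality_element(code: str) -> ET.Element:
    if code not in COMMON_CODES:
        raise ValueError(f"unexpected confidentiality code: {code}")
    return ET.Element("confidentialityCode",
                      code=code, codeSystem=CONFIDENTIALITY_CODE_SYSTEM)

print(ET.tostring(confidentiality_element("R"), encoding="unicode"))
# roughly: <confidentialityCode code="R" codeSystem="2.16.840.1.113883.5.25" />
```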

Wednesday, August 1, 2012

Texas HIE Consent Management System Design

A fantastic white paper from Texas HIE on their Consent Management system design. This paper can be found with other information on Texas HIE.

The white paper purpose:
Texas Local HIEs need a method to determine if a patient has expressed an "opt-out" or "opt-in" preference for exchange between HIEs. Local HIEs also need a way to determine if a patient has acknowledged or exercised local fine-grained consent, authorization, or other policy. Automation of the access control decision is highly desirable and should be possible in most, if not all, cases using the envisioned services.
This architecture covers consent management. It is the system design, and because it is based on existing standards it is being accelerated:
Given that the THSA’s consent management approach is very similar, in terms of technology components, to its approach to other identified Phase II state shared services, it has elected to accelerate the deployment of state-level consent management services from year 7 to year 1 to ensure that the needs of the state’s Local HIEs are being met.
I am very happy to see that they are designing Privacy in from the start. I can't help but say, this is BIG. I guess it is true that they like to do things BIG in Texas. Note that it is actually only a very solid first step, with plenty of room for advancement as standards advance, which they acknowledge at the end of the paper.

The Use-cases that are driving the work include
  • A patient wishes to express “all in” or “all out” consent for exchange between HIEs. 
  • A patient within a Local HIE Grant Program awardee expresses authorization for use of his/her data for research purposes. 
  • A responding gateway needs to determine if a patient has expressed a consent preference that would prevent access to a record. 
  • A patient wishes to change his/her consent preferences. 
  • A Direct Project-based exchange participant emails a THSA consent document to the state consent service.
The standards architecture follows what I outline and extends it with use of HL7 v2 and Direct:
As explained in more detail in the RFP Functional Requirements Grid, the state consent services leverage industry standard components and technologies.  IHE XDS.b is used to act as an index to and optional storage of consent documents.  IHE BPPC is used to represent the patient’s consent expression.  Direct is being used to receive consent documents from non-HIE/non-EHR sources.  PKI is used to ensure security of each web services end-point of the exchange, and to ensure the communications channel is encrypted.    The advantage of this leveraged approach is that the state consent services essentially re-use existing building blocks for a new purpose, allowing for substantial re-use of existing production products.
The really important part, which I have failed to explain well enough, is the creation of a vocabulary. This white paper explains how to create this vocabulary very well.

In order to enable automated access control decisions regarding the release of a patient’s medical record from one HIE to another HIE, it is necessary for each of those HIEs to understand each other’s policy. Specifically, it is necessary for each HIE to have the ability to decide if a given patient’s policy preference allows or disallows access. In order to accomplish this, there must be a published and uniform vocabulary of each consent policy type used within the state. Thus, Phase II state shared services includes a work stream to create such a statewide vocabulary of state, federal, and local HIE policy expressions.
Paraphrased from the whitepaper:
  1. Create an inventory of each patient policy form for each Local HIE. Since most Local HIEs only have one to three policy forms, this is not anticipated to generate a large number of policies in the initial inventory. 
  2. Add to this inventory all state and federal policies (of which there are also expected to be only a small number). 
  3. Reconcile similar policy documents and create the smallest possible list of discrete policies. 
  4. Turn this list into a list of policy vocabulary identifiers (OIDs) that uniquely identify each version of each policy in this statewide vocabulary. 
  5. Once the statewide policy vocabulary list has been created, only values from this list will be used in each patient’s consent acknowledgement IHE Basic Patient Privacy Consent (BPPC) document.
This resulting vocabulary allows for the automation of consent processing. My diagram from the BPPC webinar is on the right, with the Texas HIE description on the left:
Each request for each patient’s medical information across HIE boundaries will result in the responding HIE gateway being required to search for, retrieve, and inspect any BPPC documents stored at the state policy servers. This allows the policy engines at each responding HIE to automatically determine if they will allow or deny the request based on their local policy and based on the knowledge of the policy the patient acknowledged.
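Here is a rough sketch of the automated check that quoted paragraph describes; the OIDs and policy meanings are invented placeholders, not the actual statewide vocabulary.

```python
# Sketch: a responding HIE checks the policy OIDs acknowledged in the
# patient's BPPC documents against its own rules before releasing data.
STATEWIDE_POLICY_VOCABULARY = {        # placeholder OIDs, for illustration
    "1.2.3.4.1": "opt-in to exchange between HIEs",
    "1.2.3.4.2": "opt-out of exchange between HIEs",
    "1.2.3.4.3": "authorization for research use",
}

def permits_cross_hie_access(acknowledged_policy_oids: set) -> bool:
    """Allow release only if an opt-in is on file and no opt-out is."""
    if "1.2.3.4.2" in acknowledged_policy_oids:
        return False
    return "1.2.3.4.1" in acknowledged_policy_oids

print(permits_cross_hie_access({"1.2.3.4.1"}))  # True: opt-in acknowledged
print(permits_cross_hie_access(set()))          # False: no consent on file
```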
They are also creating a toolkit for the participants to use:
The toolkit contents will be determined in coordination with Local HIEs and are expected to contain: 1) sample BPPC documents with and without optional scanned signature and digital signature components; 2) a list of the current policy OIDs along with references to the officially-maintained statewide list of policy OIDs; 3) use cases with diagrams; and 4) other documentation covering rules of use, scope, etc.
There is far more in the white paper, including how existing Consent Management systems that use the HL7 v2 consent message (CON segment) are integrated; how interfaces will be created to allow Consent Documents to be submitted using Direct for those not connected to XDS or XCA; how BPPC documents will be managed in a flexible but fully discoverable way; and how they support wet signatures and digital signatures through the BPPC-defined methods.

It is a very good document, although I do have some questions posted about the UML diagrams. Similar problems as I see with the S&I Framework - Data Segmentation for Privacy UML diagrams. 

So, add this to the great work by Connecticut RHIO

See Also: