Wednesday, August 24, 2016

Vectors through Consent to Control the Big-Data Feeding Frenzy

This is part of a series of articles on the various Privacy Consent mechanisms being developed in HL7, IHE, and HEART. This article details the various vectors that Patients desire to control. The discussion will not cover any of the specific solutions, but rather the overall requirements.

For some background, please see my prior article Controlling Big-Data feeding frenzy with Privacy Consent Authorization

First step is to recognize that Privacy Consent must enable the Patient to define Rules and Obligations. This is abstractly represented by their Policy -- My Policy -- which follows their Data.

Thus when someone or something tries to access their Data, there is an authorization (AuthZ) check done. This authorization check assures that the Patient would be happy allowing their data to be used in the way that the someone or something is going to use it. I am speaking abstractly, so no specific authentication, context, method, obligations, etc. Just that the Patient would be happy, not upset.

I have taken the five elements from my other article:
  1. The Patient -- Smiling
  2. The Patient's policy - "My Policy"
  3. The Patient's data - "My Data"
  4. The someone or something that wants to gain access
  5. The Authorization decision that is based on the request, and the patient's policy
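
The abstract check described above can be sketched in a few lines. This is purely illustrative: the names, the rule shape, and the request attributes are all invented here, not taken from any HL7, IHE, or HEART specification.

```python
# A minimal sketch of the abstract authorization (AuthZ) check:
# "My Policy" is a set of rules; access is allowed only when every
# rule agrees -- i.e., when the Patient would be happy, not upset.

def authorize(request, my_policy):
    """Return True only if the request satisfies every rule in the policy."""
    return all(rule(request) for rule in my_policy)

# Hypothetical "My Policy": each rule is a predicate over the request.
my_policy = [
    lambda req: req["purpose"] == "treatment",
    lambda req: req["requester"] in {"dr-bob", "care-team"},
]

# Element 4: the someone or something that wants to gain access.
request = {"requester": "dr-bob", "purpose": "treatment", "data": "my-data"}
print(authorize(request, my_policy))  # prints True -- a happy patient
```

The point is only the shape of the decision: the request, the patient's policy, and a yes/no answer (elements 4, 2, and 5 above).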

The solutions being created by HL7, IHE, and HEART specify these things in different ways. The differences are important, although the goals are the same.

In all models they look to address the Privacy control. They all are careful to make it clear that businesses holding data must and can still control their data according to their rules. So this is not a replacement for, or hindrance of, Role-Based-Access-Control (RBAC) or other mechanisms that manage workflow-centric authorization.

My Policy -- rules

The contents of "My Policy" are potentially very complex. I covered this space back in 2011 in "Access Controls: Policies --> Attributes --> Implementation", an article that is still valid. The policy might be very simple, or it might be very complex. The solutions might be able to manage the simple cases, and only some of the complex ones.

The most simple is

NO!

That is right, the most simple policy is when the patient has either explicitly said "no", or has said "no" through inaction in an explicit-consent environment.

My Policy - unknown

Right away I had to add a different situation: what is the Policy when the Patient has not yet expressed their choice, agreement, or 'consent'? The harder part is that the policy in place before the Patient has expressed consent is often driven by Law/Regulation, or by Medical-Ethical standards.

There is a concept of "Implied Consent" that is related, but not exactly the same. Implied Consent is the policy that is in place when the Patient has taken an action to engage, but has not yet expressed an explicit consent. That is, through their action of going to a Doctor, they are implicitly consenting to some default form of "YES". This default form of "YES" might not be obvious. HIPAA requires that the Doctor post a Notice of Privacy Practices.

Medical-Ethics covers the case where a treating clinician, under their Hippocratic oath, can determine that it is better for your safety, or the safety of others, that you be treated regardless of your explicit denial of authorization, or absence of authorization. Most medical-ethics will keep this 'invasion of privacy' to a minimum. There may be a technical override used, often called Break-Glass.

A patient that doesn't even want break-glass used must be really insistent. Most "NO!" indications have many exceptions. Not just medical-ethics, but required government reporting, required medical records retention, etc. Many of these simply can't be wished away. This is why the "My Policy" space in all models expects only to control the data to the level of control given to the Patient. Which, sadly, is sometimes very little; otherwise known today as "Data Blocking".

My Policy - Vectors

I am going to use the word "Vectors" as each of the following is an independent attribute and control. Most of them can be combined in various ways. This is best modeled mathematically as "Vectors".

The category of things to control:
  • User - the context of the request for data
  • Application -- what is going to process the data
  • Resource -- the data
User
  • User Identity -- who is this specific user (e.g. userId, Provider-ID, nationally issued ID)
  • User Relationship -- what is the relationship between the user and the patient (e.g. care-team, mother, son, guardian, lawyer, law-enforcement)
  • User Role -- what is this user functionally doing that requires access to the data
  • User Organization -- what organization is the user working within
  • User Purpose - what is the user going to do with the data (e.g. Treatment, Payment, Public-Health, Research, Disclose)
  • User Location - where is the user, and thus where might the data go
  • User timeframe - when is the user access happening
Application
  • Application Identity - what application will get access to the data
  • Application Security -- what is the security of the application
  • Application timeframe - how and for how-long will the application persist the data
  • Application promise - what assurances can the application give on how it will treat the data
Data
  • Data Identity - unique identifier of the data
  • Folder Identity -- the folder this data sits within
  • When was the data created
  • When was the data last updated
  • Who authored the data
  • Who verified the data
  • Where was the data authored
  • Availability, has the data been replaced or refuted
  • What kind of treating facility authored the data
  • What kind of care practice setting authored the data
  • Predecessor data that was used in the authoring of this data (e.g. Order)
  • Successor data that was created based on this data (e.g. Discharge Summary)
  • Relationships to other data (e.g. folder identifier)
  • Type of data object
  • Type of clinical content implied by the data (e.g. Pregnant, Cancer, Addict)
  • etc
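
To illustrate how these vectors combine, a single policy statement can be modeled as a conjunction of constraints, one per vector, in an attribute-based access control style. This is a minimal sketch with invented attribute names, not any standard's encoding.

```python
# A policy statement constrains several independent vectors at once;
# a request matches only when every constrained vector is satisfied.

def matches(request, statement):
    """True when the request's value for each constrained vector
    is within that vector's allowed set."""
    return all(request.get(vector) in allowed
               for vector, allowed in statement.items())

# Hypothetical statement mixing User, Application, and Resource vectors.
statement = {
    "user_role": {"nurse", "physician"},      # User vector
    "user_purpose": {"treatment"},            # User vector
    "app_id": {"ehr-portal"},                 # Application vector
    "data_type": {"lab-report", "summary"},   # Resource vector
}

request = {"user_role": "physician", "user_purpose": "treatment",
           "app_id": "ehr-portal", "data_type": "lab-report"}
print(matches(request, statement))  # prints True
```

A full "My Policy" would then be many such statements, combined with permit/deny semantics that the abstract model here deliberately leaves out.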

My Policy Examples

As I indicated, the above set of Vectors is cross-cutting; most policy statements will be made up of many vectors, and any "My Policy" is made up of many policy statements.

Here are a few reasonable examples:
  1. Authorize the release of the documents authored by St. Michael hospital to a named rehabilitation facility for the purpose of recovery.
  2. Authorize the release of the documents authored by St. Michael hospital, related to a specific surgery episode, to a post-surgery care-team.
  3. Authorize a treatment facility to gather all historic medical records from four other treating organizations.
  4. Authorize Dr Bob to have access to all records at St. Michael hospital except for those documents created during the fall of 1998.  
My goal is not to show examples for everything. I simply want to show how what seems clear in English text might be very hard to encode in a set of rules, and possibly very hard to enforce. Such as the last example:

      5. Authorize my Parents access to all records except those related to drug abuse.

Especially hard since various tests might be given for drug abuse, but those same tests have many non-drug-abuse purposes. Many medications might be given to alleviate the effects of drug-abuse, but are also used for other conditions. Many locations (e.g. Betty Ford Clinic) are clear indications of a drug-abuse case, but not all locations are so obvious.

Also note that it is not always obvious who "Parents" are. Especially when a Parent might be a Clinician who might have other User accounts to use. 
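
Example 4, by contrast, can be encoded crisply: a permit rule with a date-range deny exception. This is a hypothetical sketch; the custodian identifier and the dates chosen to bound "fall" are invented, and even that bounding is itself a policy decision.

```python
from datetime import date

# Invented bounds for "the fall of 1998".
FALL_1998_START = date(1998, 9, 1)
FALL_1998_END = date(1998, 12, 21)

def permits_dr_bob(doc):
    """Example 4: Dr Bob may access all St. Michael records, except
    documents created during the fall of 1998 (the deny exception wins)."""
    if doc["custodian"] != "st-michael":
        return False
    if FALL_1998_START <= doc["created"] <= FALL_1998_END:
        return False
    return True
```

Example 5 has no such crisp predicate: "related to drug abuse" is a clinical judgment over tests, medications, and locations, not a date range, which is exactly why it is hard to enforce.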

Conclusion

What I have expressed is the Vectors that Consent Policies (authorization policies) potentially need to be able to encode in a computable way. That is, in a way that an Access Control decision can be made that would make the Patient happy, not upset. This set of vectors is the set that I recall from the various use cases; I am happy to hear about others, and I will gladly update this article.

In HL7, IHE, and HEART, they are building parts of the system to do this. The differences are important, although the goals are the same: enable the patient to be happy, not upset.

Other articles on Privacy Controls

Monday, August 22, 2016

Controlling Big-Data feeding frenzy with Privacy Consent Authorization

This is a start of a series of articles on the various Privacy Consent mechanisms that are being developed in HL7, IHE, and HEART. I will describe them quickly, but will go into more detail in later articles.

Big Data Feeding Frenzy

This is what most people think of when they hear about healthcare data in the context of "Big Data". This diagram shows a very minimal "Privacy Consent", one that only controls whether the Patient's data is available or NOT.

Too often this is referred to as OPT-IN, or OPT-OUT. This is a mischaracterization, but it is one that must be addressed. This kind of configuration does happen. I am not saying it doesn't happen. Just that it shouldn't be confused with OPT-IN, or OPT-OUT.

I know that I will not succeed in defining OPT-IN or OPT-OUT. They have too much momentum behind poor definitions. I did try to add clarity in the FHIR Consent "General Model" section.

Consent Controlled Feeding

First step is to recognize that Privacy Consent must enable the Patient to define Rules and Obligations. This is abstractly represented by their Policy -- My Policy -- which follows their Data.

Thus when someone or something tries to access the Big Data, there is an authorization (AuthZ) check done. This authorization check assures that the Patient would be happy allowing their data to be used in the way that the someone or something is going to use it. I am speaking abstractly, so no specific authentication, context, method, obligations, etc. Just that the Patient would be happy, not upset.

In HL7, IHE, and HEART; they are building parts of the system to do this. The differences are important, although the goals are the same.

In all models they look to address the Privacy control. They all are careful to make it clear that businesses holding data must and can still control their data according to their rules. So this is not a replacement for, or hindrance of, Role-Based-Access-Control (RBAC) or other mechanisms that manage workflow-centric authorization.

Documented Privacy Consent Act Controls the Feeding

The first model I will explain is one that both HL7 and IHE are approaching. In this model the goal is the same: to give the Patient control over how their data is used, to keep them happy. The method that they use is more focused on standardizing a record of the Ceremony of Consenting, and the Rules that were part of that Consent, in a computable form; where computable means that an Authorization engine understands the rules and obligations, vs. only a human.

This model does NOT define exactly how the recorded consent rules will be used in an Authorization (AuthZ) system; however, it is completely reliant on an Authorization (AuthZ) (aka Access Control) decision engine being used. There are many, some based on standards such as XACML.

The model is focused on recording the facts of the Consent:
  1. Who - The patient
  2. What - The data - specific resources are listed, empty list means all data covered by the consent.
  3. Where - The domain and authority - what is the location boundary and authority boundary of this consent
  4. When - The date the consent was issued or captured
  5. When - The timeframe for which the Consent applies
  6. How - The actions covered. (such as purposes of use that are covered)
  7. Whom - The recipients who are granted access by the consent.
More later on the specifics of HL7 FHIR Consent, and on IHE BPPC and APPC.
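
The Who/What/Where/When/How/Whom facts above could be captured in a simple record. This is a hypothetical sketch of that shape, not the actual HL7 FHIR Consent or IHE APPC schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class ConsentRecord:
    patient: str                                       # 1. Who - the patient
    data: List[str] = field(default_factory=list)      # 2. What - empty list = all data
    authority: str = ""                                # 3. Where - domain/authority boundary
    issued: Optional[date] = None                      # 4. When - issued or captured
    applies: Optional[Tuple[date, date]] = None        # 5. When - timeframe it applies
    actions: List[str] = field(default_factory=list)   # 6. How - actions/purposes covered
    grantees: List[str] = field(default_factory=list)  # 7. Whom - recipients granted access

# Hypothetical example: a two-year consent for a rehabilitation facility.
consent = ConsentRecord(
    patient="patient-123",
    authority="example-hie",
    issued=date(2016, 8, 24),
    applies=(date(2016, 8, 24), date(2018, 8, 24)),
    actions=["treatment"],
    grantees=["rehab-facility-9"],
)
```

Note the "empty list means all data" convention from item 2: the record stays small for the common case while still allowing specific resources to be listed.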

User Managed Access Controls the Feeding

The second model I will explain is the one being worked on by HEART. In this model the goal is the same: to give the Patient control over how their data is used, to keep them happy. The method that they use is more focused on standardizing the Authorization Decision.

This model does not try to define the access control (authorization) rules. It expects that the "UMA Service" will have some User Interface that the Patient will use to explain their desires and wishes regarding how they want their data used, and how they don't want their data used. By placing this in a UI, the HEART group doesn't need to worry about all the specifics of the rules, as the rules can get richer and richer over time based on User Experience.

This model does need to be placed into the critical path of each and every request for data, as that is when the "Authorization Decision" is made. There are optimizing methods, using timeouts and scopes, that mean the Authorization Decision can be made once for a bulk of time and type of data. So it isn't as intrusive as the abstract model would imply. With a well-behaved and clean data-architecture, one decision can be used for many transactions over many minutes. Unfortunately the data does need to be cleanly architected along lines that are usable by an Authorization Decision. This is where HEART is having trouble today. This problem will be solved; the solution is just not clear today.
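
The timeout-and-scope optimization described above can be sketched as a small decision cache. The names here are illustrative only; this is not the UMA wire protocol, just the idea of reusing one decision for many transactions.

```python
import time

class DecisionCache:
    """Cache an Authorization Decision per (requester, scope) for a
    bulk of time, keeping the decision service out of the critical
    path of every single data request."""

    def __init__(self, decide, ttl_seconds=300):
        self.decide = decide        # the expensive AuthZ decision call
        self.ttl = ttl_seconds
        self._cache = {}

    def allowed(self, requester, scope):
        key = (requester, scope)
        hit = self._cache.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]           # reuse a recent decision
        result = self.decide(requester, scope)
        self._cache[key] = (result, time.monotonic() + self.ttl)
        return result
```

The trade-off is visible in the `ttl`: a longer timeout means fewer trips to the decision service but a longer window before a changed consent takes effect.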

Both UMA and Consent

It is possible to use both. There are a couple of ways to mix these two together. It is not yet clear what is a good approach vs. a bad approach; much more experience is needed.

1) Here is one where the Consent is captured, and UMA is used just to make the Authorization decision. This model looks very much like the second one above, except that the "Red" AuthZ decisions have a defined standard, that being UMA. This model thus puts a standard in the Decision path, but doesn't do much else.

2) Here is one where UMA is still providing the User Interface to capture the consent, but that consent is persisted into a standard document form, like FHIR Consent. This model is far closer to the UMA-like experience. It can take advantage of the UMA and OAuth specifications far better. It also meets documentation requirements using a standards-based Consent document, which doesn't seem that important: a UMA service can create a document in any form, so using a standard form seems of little value when nothing else reads it.

3) There are other models being discussed, such as one where UMA is predominant, but where FHIR Consent is used to define the UMA 'scope'. This model allows the 'scope' value to be a structured JSON object that contains rules and obligations. Thus the Authorization Decision doesn't need to be a pre-configured simple string.

The combined approaches seem more academic than realistic. They are helpful to moving the state-of-the-art.

Conclusion

All of these approaches are under development and need people to try them out. I will be further defining these, and will continue to help develop them. I do NOT expect a winner soon, and actually expect ALL of them to be used significantly. In the coming articles I will explain the benefits that each model brings; these benefits are critical in specific contexts.

Other articles on Privacy Controls

Saturday, August 20, 2016

Consent Process

Too often Consent is seen as a one-time thing. It is far more than this. Here is an infographic.

My point with this is that there are many big steps:

  • Defining Policy
  • Act of Consent from the Patient
  • Enforcing Consents 
  • Notification of Use
This graphic tends to imply these are four clean steps done in sequence, when actually they might happen in various sequences.

For example: Imagine a Research project that wants to use specific kinds of data. They do need to have their policies defined. They might have scouting authorization to find potential cohort participants. This scouting, only returns potential pseudonymous identifiers, no data. This access to find the potential cohort results in a notification to the patient that a specific Research project is interested. This notification encourages the patient to review the terms of the Research project and agree to participate. Thus now the Research project can access the data. 

More details to come. Articles on the Patient Privacy Choice topic, including past and future.

Aiding Online Informed Consent using Social Commentary

I am excited that the topic of Consent is becoming so important that it is getting attention. Consider this study, which I think is really exciting, while embryonic. The study admits this, so they are not ignorant. They are addressing the fact that on-line consent is a very disappointing experience, and trying something to see if it can make that better.

What they have done is to ask if the systems we use on the social network can be used to help people understand the terms of a Consent. Like we use on Amazon to determine if we want to purchase an item.  Like on Facebook to encourage reading of an article. Like on YouTube to applaud good work, or not.

The idea is to allow those that are being asked to Consent, to ask questions, review others questions, review others observations, etc. Their results are encouraging.

However their results, by their own admission, are potentially contrived. We all know social systems that go horribly wrong. They get hijacked by people with an agenda; both positive and negative. They get filled with useless babble. They either provide too much anonymity or not enough identity to be trustworthy. Interesting that the method of gaining Consent might have Consent (Privacy) issues as well.
All of these things are yet to be solved.

I like their overall premise, that moving from in-person based Consent, to a purely on-line web-form, will drive for less 'informed' consent, and possibly less participation. So they are trying to discover ways to make purely on-line experience better.

Their paper is long, but very nicely comprehensive.

Background: Social media, mobile and wearable technology, and connected devices have significantly expanded the opportunities for conducting biomedical research online. Electronic consent to collecting such data, however, poses new challenges when contrasted to traditional consent processes. It reduces the participant-researcher dialogue but provides an opportunity for the consent deliberation process to move from solitary to social settings. In this research, we propose that social annotations, embedded in the consent form, can help prospective participants deliberate on the research and the organization behind it in ways that traditional consent forms cannot. Furthermore, we examine the role of the comments’ valence on prospective participants’ beliefs and behavior.
Objective: This study focuses specifically on the influence of annotations’ valence on participants’ perceptions and behaviors surrounding online consent for biomedical research. We hope to shed light on how social annotation can be incorporated into digitally mediated consent forms responsibly and effectively.
Methods: In this controlled between-subjects experiment, participants were presented with an online consent form for a personal genomics study that contained social annotations embedded in its margins. Individuals were randomly assigned to view the consent form with positive-, negative-, or mixed-valence comments beside the text of the consent form. We compared participants’ perceptions of being informed and having understood the material, their trust in the organization seeking the consent, and their actual consent across conditions.
Results: We find that comment valence has a marginally significant main effect on participants’ perception of being informed (F2=2.40, P=.07); specifically, participants in the positive condition (mean 4.17, SD 0.94) felt less informed than those in the mixed condition (mean 4.50, SD 0.69,P=.09). Comment valence also had a marginal main effect on the extent to which participants reported trusting the organization (F2=2.566, P=.08). Participants in the negative condition (mean 3.59, SD 1.14) were marginally less trusting than participants exposed to the positive condition (mean 4.02, SD 0.90, P=.06). Finally, we found that consent rate did not differ across comment valence conditions; however, participants who spent less time studying the consent form were more likely to consent when they were exposed to positive-valence comments.
Conclusions: This work explores the effects of adding a computer-mediated social dimension, which inherently contains human emotions and opinions, to the consent deliberation process. We proposed that augmenting the consent deliberation process to incorporate multiple voices can enable individuals to capitalize on the knowledge of others, which brings to light questions, problems, and concerns they may not have considered on their own. We found that consent forms containing positive valence annotations are likely to lead participants to feel less informed and simultaneously more trusting of the organization seeking consent. In certain cases where participants spent little time considering the content of the consent form, participants exposed to positive valence annotations were even more likely to consent to the study. We suggest that these findings represent important considerations for the design of future electronic informed consent mechanisms.
J Med Internet Res 2016;18(7):e197
doi:10.2196/jmir.5662
http://www.jmir.org/2016/7/e197/

Wednesday, August 10, 2016

Certificate validation - use of CN

I got a question of what should be done with the Certificate CN (common name) value. Specifically should a system make sure that the TCP/IP Connection aligns with the CN hostname value?

The short answer is: ignore the Certificate CN; as the TLS authentication mechanism has already done cryptographically secure authentication. Adding any use of DNS to this will only result in false-negatives, meaning it will never add value but will occasionally cause you to reset a perfectly good connection.

Here is the question:
I have been trying to understand the certificate requirements in ITI-TF 2,  3.19.6.13 (Other Certificate Requirements) especially:
"The Secure Node shall not require any specific certificate attribute contents, nor shall it reject certificates that contain unknown attributes or other parameters.  Note that for node certificates the CN often is a host name, attempting to use this host name provides no additional security and will introduce a new failure mode (e.g., DNS failure). "

This requirement appears vague to me in that though it talks about CN not providing any additional security, it does not state whether we are supposed to consider it during validation or not.
I understand that maybe it is up to the implementer; what I would really like to know is whether a host name mismatch (e.g., my repository is NOT using a certificate with Common Name = machine name/host name) should cause requests from a Client using my repository's endpoint to be rejected.

I would like to know how others are dealing with certificate validation and specifically Common Name mismatch errors.
1. What does "shall not require any specific certificate attribute contents, nor shall it reject certificates that contain unknown attributes or other parameters" mean?
   - If, as a Secure Node, I want to only allow certificates known to me and reject the rest, is that counted as not meeting the IHE spec?
2. Are we supposed to ignore Common name mismatch errors entirely? And are they supposed to be ignored on the Client side (If I am a Document Source sending a document to a repository) or on the Server side (I am a registry accepting a query from a Document Consumer)?

I will really appreciate any more information on this.
We are trying to encourage everyone to ignore the CN, at least for authentication purposes. The reason is that the cryptographic validation that TLS has already completed is the secure authentication. The TLS authentication has already proven that the remote node has possession of the private key, that the certificate used is a good certificate, that the certificate is trusted directly or through a CA, and that the certificate has not been revoked.

The question points out that their Certificates do not put a hostname into the CN. Thus if anyone tries to use the CN as a hostname, they will fail a perfectly good and authenticated communications channel. This is clearly not intended or useful. So, this is a specific configuration we want to enable by encouraging implementers not to validate the CN.

Further if the CN is a hostname, looking at the CN will not fill any positive authentication function, but might cause a reset of a perfectly good connection due to a failure in the DNS lookup. DNS is not a security protocol!

Even Further; the Internet is getting bigger. The use of IPv4 often must use NAT in some regions. Use of smart clients on mobile devices can utilize multiple IP addresses over a short span of time. The use of IPv6 introduces a much more dynamic environment. 

So, ignore the CN found in the Certificate. There is enough you must do to properly implement Secure Communications, and avoid POODLE.

Does this mean the CN is useless? Not really; it might be used to differentiate various remote endpoints. It will likely be used in an Authorization decision. It should be used in the Audit Log. It simply should not be used in a reverse-DNS lookup for the purpose of failing a communication channel.
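
In Python's ssl module, for example, this advice translates to requiring full cryptographic certificate verification while disabling hostname matching. A sketch, with the function name and parameters being my own illustration:

```python
import ssl

def make_secure_node_context(ca_file=None, cert_file=None, key_file=None):
    """Mutual-TLS context for a Secure Node: the certificate must chain
    to a trusted CA (the real authentication), but the CN/SAN is NOT
    matched against a hostname and no DNS lookup is involved."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False            # do NOT match CN/SAN to hostname
    ctx.verify_mode = ssl.CERT_REQUIRED   # certificate chain MUST validate
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

After the handshake, the peer certificate's CN can still be read (e.g. from `getpeercert()`) for the Audit Log or an Authorization decision; it just plays no role in accepting or resetting the connection.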

So the reason the technical framework is not more clear is that we don't want to be overly restrictive and force the CN to be totally ignored. Thus all we can do is encourage the most common and robust solution.

Secure Communications

Saturday, August 6, 2016

HHS Fact sheet on Ransomware and HIPAA

HHS has produced an 8-page fact-sheet on Ransomware and HIPAA that is fantastic. It is so good that I have very little to say, as any emphasis I would add is already in the 8 pages. Just 8 pages, packed with very readable, reasonable, and reasoned guidance, backed by long-standing HIPAA Security and Privacy Regulation. There is no need for new regulation, as it is indeed all covered.

Call to action

I recommend hospital leadership sit down with the "Security" Office and "Privacy" Office and walk through these simple 8 pages. If ANYTHING in the 8 pages is surprising, then you have a big problem on your hands. There is NOTHING in these 8 pages that should be surprising. This fact-sheet should be viewed by hospital leadership just like a contracted penetration report, except this is more well written. For example, this quote from an HHS article, "Your Money or Your PHI: New Guidance on Ransomware":
The new guidance reinforces activities required by HIPAA that can help organizations prevent, detect, contain, and respond to threats, including:
  • Conducting a risk analysis to identify threats and vulnerabilities to electronic protected health information (ePHI) and establishing a plan to mitigate or remediate those identified risks;
  • Implementing procedures to safeguard against malicious software;
  • Training authorized users on detecting malicious software and reporting such detections;
  • Limiting access to ePHI to only those persons or software programs requiring access; and
  • Maintaining an overall contingency plan that includes disaster recovery, emergency operations, frequent data backups, and test restorations.

Ransomware is a Privacy Breach

The one point that did surprise me was the approach that an incident of Ransomware is considered a Privacy Breach, unless proven otherwise.
Some of the other topics covered in the guidance include: understanding ransomware and how it works; spotting the signs of ransomware; implementing security incident responses; mitigating the consequences of ransomware; and the importance of contingency planning and data backup. The guidance makes clear that a ransomware attack usually results in a “breach” of healthcare information under the HIPAA Breach Notification Rule. Under the Rule, and as noted in the guidance, entities experiencing a breach of unsecure PHI must notify individuals whose information is involved in the breach, HHS, and, in some cases, the media, unless the entity can demonstrate (and document) that there is a “low probability” that the information was compromised.
The point is that you must start with the conclusion that the data was breached, and prove that it was not. If the Ransomware had access enough to encrypt, then it had access enough to have exfiltrated.

The fact-sheet continues to explain this point, and explains it from many angles. It goes into explicit detail around a situation where the data that has been encrypted was already actively encrypted under the healthcare organization's encryption-of-data-at-rest. They nailed this one very nicely, and in few words.

Risk Assessment and Management Plan is not static

The one point of emphasis I would add is that the "Risk Assessment and Management Plan" that is indeed required by the HIPAA Security rule is also required to be revised periodically. 45 CFR § 164.306(e) states:
“Security measures implemented to comply with standards and implementation specifications adopted under § 164.105 [(the Organizational Requirements)] and this subpart [(the Security Rule)] must be reviewed and modified as needed to continue provision of reasonable and appropriate protection of [EPHI] as described at § 164.316.” 

Conclusion

I am very impressed and happy with all of the fact-sheets out of HHS. They have a very hard job of explaining difficult subjects to a huge and heterogeneous audience, made up of mature organizations and unprepared organizations. These fact-sheets should be viewed as an opportunity to exercise and investigate your working Security and Privacy plan.

Other articles I have on Security/Privacy Risk Assessment/Management

Wednesday, August 3, 2016

Basic Consent - a necessary first step

There are many standards efforts to develop support for Patient directed Authorization to their health data. I will be writing a few articles about these efforts.  These efforts sometimes use the term Privacy Consent, or Privacy Consent Directive, or Privacy Authorization, or Consumer Preferences, etc...

This post is about the one standards solution that is already available: IHE Basic Patient Privacy Consent. I have written about this extensively. I have spent much effort explaining why this is both a powerful solution and an under-powered solution. It is indeed both, and IHE knew this when it created this under-powered solution. It knew this so much that it included "Basic" in the title, so that it would not be seen as the ultimate solution, but rather a simple beginning.


I expected a replacement for BPPC to come along much sooner than now, but it has taken 10 years. That is right, BPPC was created in 2006. It has not been upgraded until now because it filled a need, and it was very clear about what it couldn't solve. This does not mean that there were no solutions that solved the parts that BPPC can't solve; I know that there are many solutions that went beyond BPPC. In fact these solutions are critical experimentation (Agile) for the new APPC profile. I am not going to explain APPC yet, simply going to note that Basic has been replaced by Advanced in APPC.

BPPC is 'clunky'; I am very clear about this. It requires pre-coordinated policies that must be statically defined, and configured into Access Control engines. Thus it is very limited as to what it can support. Even so, this limited set supports a wide variety of use-cases; they are just pre-coordinated use-cases, just like those supported in the paper world. So it was equally capable as paper, and yet more capable.

Some examples where BPPC are used:

Connecticut HIE:

For release of Privileged Care information, a consent document SHALL be registered with HITE-CT in the form of a BPPC conformant document using the Opt-in for Legally Protected Data (ALL) policy. Where the consumer does not wish to have their health information available to HITE-CT PHCSs, a consent document SHALL be registered with HITE-CT in the form of a BPPC conformant document using the Opt-Out (Routine Care) and at the direction of the consumer, Opt-Out (Emergency Care). All Opt-in documents SHALL include an expiration date. This date SHOULD be recorded as two (2) years from the date the agreement is executed. All policies are global within the HIE such that an Opt-Out or Opt-In captured at one location covers all HIE member organizations. Common consent language shall be provided by HITE-CT.

Table 10.2.3-1 Patient Privacy Policies
(columns: Patient Privacy Policy Identifier OID; Use; Consent Document to be Filed)

- OID: 1.3.6.1.4.1.38571.2.1.3.1
  Use: Opt-Out (Routine Care). The opt-out is restricted to viewing of data registered in HITE-CT and SHALL NOT reflect restrictions pertaining to any exchanges not delivered through HITE-CT.
  Consent Document to be Filed: HITE-CT Opt-Out Routine Care

- OID: 1.3.6.1.4.1.38571.2.1.3.2
  Use: Opt-Out (Emergency Care)
  Consent Document to be Filed: HITE-CT Opt-Out Emergency Care

- OID: 1.3.6.1.4.1.38571.2.2.3.1
  Use: Opt-in for general use (OPTIONAL use where PHCS has captured or chooses to capture specific consent for HIE participation from consumer)
  Consent Document to be Filed: OPTIONAL: Provider Generated Document

- OID: 1.3.6.1.4.1.38571.2.2.4
  Use: Opt-in for Legally Protected Data (ALL)
  Consent Document to be Filed: HITE-CT Opt-In for Legally Protected Data

- OID: 1.3.6.1.4.1.38571.2.2.4
  Use: Reflect that acknowledgement of information exchange practices has been collected from the healthcare consumer or their authorized representative
  Consent Document to be Filed: HITE-CT Acknowledgement of Information Exchange Practices
Example: A consumer has elected to Opt-Out of sharing routine clinical health information through HITE-CT. A Privacy Policy Acknowledgement Document is submitted through the consumer's primary care provider, recording the document as a scanned document under the Patient Privacy Policy Identifier OID 1.3.6.1.4.1.38571.2.1.3.2 in the XDSDocumentEntry.eventCodeList. The documentationOf/serviceEvent is populated with an effective time reflecting the current date as the 'low value' and the current date +24 months as the effective time 'high value'.
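The effective-time arithmetic in the example above can be sketched as follows. This is a minimal illustration, not a conformant implementation: serviceStartTime and serviceStopTime are real XDSDocumentEntry metadata attributes, but the dict representation is hypothetical, the "+24 months" window follows the HITE-CT language quoted earlier, and the naive year+2 arithmetic does not handle edge cases like Feb 29.

```python
from datetime import datetime

def consent_effective_times(now):
    """Compute the documentationOf/serviceEvent effective time window:
    low = current date, high = current date + 24 months, rendered in
    the compact YYYYMMDD style used by XDS metadata times."""
    high = now.replace(year=now.year + 2)  # +24 months (naive)
    return now.strftime("%Y%m%d"), high.strftime("%Y%m%d")

low, high = consent_effective_times(datetime(2016, 8, 3))

# Hypothetical slice of the XDSDocumentEntry metadata for the scanned
# consent document described in the example:
doc_entry = {
    "eventCodeList": ["1.3.6.1.4.1.38571.2.1.3.2"],  # acknowledged policy OID
    "serviceStartTime": low,   # effective time 'low value'
    "serviceStopTime": high,   # effective time 'high value'
}
print(doc_entry["serviceStartTime"], doc_entry["serviceStopTime"])
```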

Texas HIE

As an example of how much is covered by BPPC: the Texas HIE has a Privacy Policy document that is 111 pages long, bringing together dozens of national, state, and regional regulations, and addressing many different perspectives including BAA, government reporting, and specially sensitive health topics. This is the kind of thing we expected would be needed. One can't simply have a code "HIPAA" that is understood everywhere as meaning the same thing. One must always have interpretations of regulations, and that interpretation must consider other regulations, the care setting, and other factors.

 Social Security Administration

Authorization to Disclose Information to the Social Security Administration (SSA) -- eAuthorization

SSA-827 Authorization to Release Information policy is: 2.16.840.1.113883.3.184.50.1.

Although this is just a 2-page form, the policy backing this form is not simple.

 Conclusion

I am very proud to have been part of the creation of BPPC. I am surprised that it has taken 10 years to come up with an Advanced form, but I am very happy with how this Advanced form builds upon BPPC. I will explain this in another article. The lesson is that we need Basic before we can get to Advanced, and that Advanced still leverages the Basic. So we have advanced the art of Privacy Consent by first providing something simply Basic, while continuing to develop toward Advanced.

This article is all about IHE Document Sharing, not about FHIR. Yet the same lesson needs to be recognized in FHIR, and likewise in HEART with the UMA effort: start out Basic, and then continue on to more Advanced.

Historic articles Patient Privacy controls (aka Consent, Authorization, Data Segmentation)