Friday, April 26, 2013

Privacy Consent State of Mind

The space of Privacy Consent is full of trepidation. I would like to show that although there is complexity, there is also simplicity. The complexity comes in the fine details. The fundamentals, and the technology, are simple.

Privacy Consent can be viewed as a "State Diagram": by showing the current state of a patient's consent, we can show the changes in state. This is the modeling tool I will use here.

I will focus on how Privacy Consent relates to access to Health Information that is shared through some form of Health Information Exchange (HIE). The architecture of this HIE doesn't matter; it could be PUSH or PULL or anything else. The concepts I show can apply anywhere, but for simplicity think only about the broad use of healthcare information sharing across organizations.

There are two primary models for Privacy Consent, referred to as "OPT-IN" and "OPT-OUT".

Privacy Consent of OPT-OUT

At the left is the diagram for an OPT-OUT environment: one where the patient has the choice to OPT-OUT, that is, to stop the use of their data. This means that there is a presumption that when there is no evidence of a choice by the patient, the data can be used.

This model is also referred to as "Implicit Consent". The USA HIPAA Privacy Regulation utilizes this model for Privacy Consent within an organization. It is not clear to me that this HIPAA Privacy Regulation 'Implicit Consent' is expected to be used outside the original Covered Entity. It is a model used by many states in the USA.

The advantages typically pointed to with this model are that many individuals don't want to be bothered with the choice; these individuals trust their healthcare providers. Another factor often brought up is that when health treatment is needed, the patient is often not in good health and therefore not well capable of making decisions. This, however, focuses on legitimate uses and ignores improper uses. Privacy worries about both proper and improper access.

Privacy Consent of OPT-IN

At the right is the diagram for an OPT-IN environment. In an OPT-IN environment the patient is given the opportunity to ALLOW sharing of their information. This means that there is a presumption that the patient does not want their health information shared. I would view it more as a respect for the patient to make the decision.

This model is used in many regions, even within the USA. With an HIE this model will work for many use-cases quite nicely. Contrast this with the HIPAA Privacy use of Implicit Consent, which is likely the better model within an organization. The two models are not in conflict: one could use Implicit Consent within an organization, and OPT-IN (Explicit Consent) within the HIE.

Privacy Consent Policy

The above models seem simple with the words "YES" and "NO"; but this is not as clear as it seems. Indeed, the meaning of "YES" and the meaning of "NO" are the hardest things to figure out. They include questions of "who" has access to "what" data for "which" purposes. They include questions of break-glass, re-disclosure, and required government reporting. The "YES" and the "NO" are indicators of which set of rules apply.

The important thing is that there are different rules. The state of "YES" doesn't mean that no rules apply; there are usually very tight restrictions. The state of "NO" often doesn't truly mean no use at all; there is usually some required government reporting, such as for the purposes of protecting public health. The sketch below illustrates this.
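To make this concrete, here is a minimal sketch in Python. The rule names ("treatment", "marketing", "public_health") and the function is_access_permitted are entirely hypothetical, chosen only to illustrate that "YES" and "NO" each select a rule set rather than being absolute answers:

```python
# Hypothetical rule sets selected by the consent state. Even "NO" is not
# "no use at all": required public-health reporting continues. Even "YES"
# is not "anything goes": tight restrictions still apply.
CONSENT_RULES = {
    "YES": {"treatment": True, "marketing": False, "public_health": True},
    "NO":  {"treatment": False, "marketing": False, "public_health": True},
}

def is_access_permitted(consent_state: str, purpose_of_use: str) -> bool:
    """The consent state only selects which rule set applies;
    the rule set itself makes the decision."""
    rules = CONSENT_RULES[consent_state]
    return rules.get(purpose_of_use, False)

assert is_access_permitted("NO", "public_health")   # "NO" still permits required reporting
assert not is_access_permitted("YES", "marketing")  # "YES" still restricts improper uses
```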

Privacy Consent: YES vs NO

The reality of privacy consent is that there will be a number of patients that will change their mind. This is just human nature, and there are many really good reasons they might change their mind. A patient that has given OPT-IN authorization might revoke their authorization. A patient that has indicated they don't want their data to be shared might decide that they now do want to share their data. For example, as a patient ages they recognize that they can be best treated if all their doctors can see all the other doctors' information.

Thus, while the state diagram for OPT-IN or OPT-OUT seems very simple, one must recognize the need to support transitions between "YES" and "NO".

Privacy Consent of Maybe

Lastly, we all recognize that the world is not made up of 'normal' people. There are those that have special circumstances that really require special handling. This I am going to show as another state, "MAYBE". This state is an indicator, just like "YES" or "NO", but in this case it indicates that there are patient-specific rules. These patient-specific rules likely start with a "YES" or a "NO" and then apply additional rules. These additional rules might be to block a specific time-period, block a specific report, block a specific person from access, allow a specific person access, etc. These special rules are applied against each access.
Note that the state diagram shows transitions between all three states. It is possible that one goes into the "MAYBE" state forever, or for just a while.
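As a sketch of how simple this state machine really is, consider the following Python. Every name here (PatientConsent, permits, the rule signature) is hypothetical; it only illustrates the three states, the transitions between them, and the per-access evaluation of patient-specific rules in "MAYBE":

```python
from typing import Callable, List, Optional

class PatientConsent:
    """A sketch of the three-state consent model."""

    def __init__(self, default_state: str):
        # OPT-OUT environments presume "YES"; OPT-IN environments presume "NO".
        self.state = default_state
        self.baseline = "NO"  # the YES/NO that the "MAYBE" rules start from
        # Each rule inspects one access request and returns True (allow),
        # False (block), or None (no opinion).
        self.special_rules: List[Callable[[dict], Optional[bool]]] = []

    def change_state(self, new_state: str) -> None:
        # A patient may change their mind at any time; transitions are
        # allowed between all three states, and "MAYBE" can be permanent
        # or temporary.
        assert new_state in {"YES", "NO", "MAYBE"}
        self.state = new_state

    def permits(self, request: dict) -> bool:
        if self.state != "MAYBE":
            return self.state == "YES"
        # "MAYBE": start from a YES/NO baseline, then apply each
        # patient-specific rule (block a time period, block a report,
        # block or allow a specific person, ...) against this access.
        decision = (self.baseline == "YES")
        for rule in self.special_rules:
            verdict = rule(request)
            if verdict is not None:
                decision = verdict
        return decision

# Example: baseline NO, but one named clinician is explicitly allowed.
consent = PatientConsent(default_state="NO")
consent.change_state("MAYBE")
consent.special_rules.append(
    lambda req: True if req.get("who") == "dr-example" else None)
assert consent.permits({"who": "dr-example"})
assert not consent.permits({"who": "anyone-else"})
```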

Privacy Consent is a Simple State Diagram

I hope that I have shown that Privacy Consent is simply state transitions. I really hope that I explained that each state has rules to be applied when a patient is in that state. Implicit (OPT-OUT) and Explicit (OPT-IN) are simply indicators of which state one starts in: which state is presumed in the absence of a patient-specific decision. The rules within each state are the hard part. The state diagram is simple.

Other Resources

  • Patient Privacy controls (aka Consent, Authorization, Data Segmentation)
  • Access Control (Consent enforcement)


Wednesday, April 24, 2013

mHealth Solution

I have been involved in many efforts targeting the mHealth use-case. I have not been involved in all of them; I am sure no one has been involved in all of them. Specifically, I have been involved in the efforts in IHE, HL7, and DICOM. I have mostly spent my time on Security and Privacy, but I have also had my hand in some of the interoperability aspects. This means that I have a perspective, and know that it is only my perspective. This means I know that I don't know it all. This is my blog however, so you should already know that.

What is mHealth? 

mHealth is a highly overused term nowadays. The reason it is overused is that it is a term that is cool. How it gets abused is that the term is not defined. Because it is undefined, it gets to multiply the excitement that anyone has around the term, without focusing any progress. This means that 10 people who have 10 different perceptions of what the term means feed off of the excitement of the other 9, while getting none of the benefits of collaborative design. Thus this term is burning lots of excitement without making as much progress as it should. To show just how divergent these perceptions are, here are some I have heard:
  • mHealth means that the healthcare-data is highly movable and thus can flow to where ever it is needed
  • mHealth means that the way I access my health-data is through a mobile device
  • mHealth means that I as a patient can pull copies of my data and move it wherever I want
  • mHealth refers to sensors that I carry on my body all the time, such as fitbit
  • mHealth means that my consent automatically applies to where ever my data is accessed
Added June 2013
  • mHealth refers to the mobile medical devices that move between facilities on trucks (e.g. CT)
  • mHealth refers to the use of barcode and such to track healthcare technology movements

The mHealth Solution

You can see from the first five that in some cases the data are 'mobile', in others the device used to access the data is 'mobile', in others the patient is 'mobile', and in others the sensors are 'mobile'. These are just four different viewpoints. YES, they could all be the same. BUT the solution spaces for these are not working on all of them, or even more than one of them at a time. There are solution spaces working on these issues, but not necessarily on the same ones.
These are not all the efforts, nor all the perspectives on mHealth. None of these perspectives are wrong, and all of them are proper things to be doing.

Consent portability

I do have to caution that the consent moving with the data is the least mature. Mostly this is because there are far too many moving parts being worked on. That is, the architectures for how data are moved and accessed are not yet stabilized. Some are moving data in e-mail, others using REST, others using SOAP, others using USB/CD-ROM, and others using proprietary means. Trying to come up with a single way to control access across all of these is hard, and trying to control them all is futile at this point. This doesn't mean there is nothing going on; there is much going on.

mHealth is anything you want

This is not a fundamental problem. This is not a problem that will cause failure to meet mHealth expectations. I want to urge understanding that the term is not well defined, and thus the one you are talking to might be thinking something totally different. What they are thinking is not wrong. It is just important to be sure that you understand their perspective. Thus the mHealth solution is many, not one.


Friday, April 19, 2013

Direct incompatibility with off-the-shelf e-mail

Why choose a popular underlying standard if you are not going to leverage it? Surely you should not make explicit changes that break it.

The Direct Project chose to use e-mail, and the security layer S/MIME. This choice was due to the widespread use of e-mail. Widespread use in the case of e-mail can be proven by the very fact that today e-mail is still the most used protocol on the internet. This is in the face of those that would like to consider "the Web" as synonymous with "the Internet". The statistics say that it is closer to "e-mail" being synonymous with "the Internet". Actually, the two combined make up most of the internet.

The Direct Project expectation was that healthcare should only need to specify the trust framework -- see DirectTrust.org for one organization trying really hard to make this a reality. This trust framework would allow a sender to be sure that what they are sending can only be seen by the one they are sending it to, and no-one in between. This trust framework would allow a receiver to know that the content absolutely came from the one indicated as the sender, and no-one in between. This trust framework is critical to success. But this trust framework is 99% policy. The technology portion of this trust framework is all standards based and embodied in the common use of S/MIME and the PKI that supports it.

Direct Specification is NOT leveraging common implementations of S/MIME e-mail!

I have written on this topic before. At that time it was about the specific rules on how one must DISCOVER the certificate of the recipient you are sending e-mail to: MU2 - Why must healthcare use custom software when Thunderbird and Outlook would do? In that article I explain that this requirement was overly restrictive. It forces a specific certificate distribution model that is unique to healthcare. It doesn't support the Trust Framework. It just gets in the way of using off-the-shelf software, thus forcing healthcare to use custom software.

Direct Specification forces case-sensitivity when none is necessary!

Now there is an effort to force case-sensitivity on Direct Project addresses. This technically is specified in the underlying standard, but it is not always implemented this way. Let me explain. The underlying e-mail specifications do indicate that the first part of the destination address shall be case-sensitive. This is because some destination systems are indeed case-sensitive. However, not all destination systems want to be case-sensitive.

It is true that case-insensitivity is ambiguous once you leave the classical ASCII character set. Therefore case-sensitivity is indeed more easily proven, and thus interoperable. However, 'allowing' case-insensitivity when the destination system wants to allow it should be allowed.

What is happening is that there are test tools being developed to test implementations of Direct. These test tools are being written strictly. This strict interpretation of the standards is a good thing for test tools to do. But in some cases systems need to be allowed to be more liberal in what they accept. Destination systems should not be forced to be so restrictive. This is an application of the Robustness Principle, also known as Postel's law after Internet pioneer Jon Postel: "be conservative in what you do, be liberal in what you accept from others".

We MUST be reasonable. The case requirement is more focused on being case-preserving, so that an endpoint 'can' be case-sensitive. That is to say that senders and intermediaries must preserve the case. To require that the endpoint MUST be case-sensitive is overly restrictive. This would cause many common e-mail infrastructures to be declared non-compliant. Most off-the-shelf e-mail treats the whole address as case-insensitive. This declaration of non-compliance would come at no benefit, and would limit the market space available for healthcare use. A sketch of the reasonable behavior follows.
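Here is a minimal Python sketch of that case-preserving behavior, assuming hypothetical function names. The domain part of an address is case-insensitive per the e-mail standards; the local part (before the '@') is case-sensitive in principle, but a destination system may choose to treat its own mailboxes insensitively:

```python
def normalize_for_routing(address: str) -> str:
    """Senders and intermediaries: preserve the local part exactly as given,
    lower-casing only the domain (which is case-insensitive anyway)."""
    local, _, domain = address.rpartition("@")
    return local + "@" + domain.lower()

def mailbox_matches(incoming: str, mailbox: str,
                    local_case_sensitive: bool) -> bool:
    """Receivers: compare the local part case-sensitively only if this
    particular destination wants to be case-sensitive."""
    in_local, _, in_domain = incoming.rpartition("@")
    mb_local, _, mb_domain = mailbox.rpartition("@")
    if in_domain.lower() != mb_domain.lower():
        return False  # domains never match case-sensitively
    if local_case_sensitive:
        return in_local == mb_local
    return in_local.lower() == mb_local.lower()

# Because the case was preserved end-to-end, a case-sensitive endpoint can
# still distinguish mailboxes; a case-insensitive one simply accepts both.
assert mailbox_matches("Dr.Smith@Example.org", "dr.smith@example.org", False)
assert not mailbox_matches("Dr.Smith@example.org", "dr.smith@example.org", True)
```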

Direct Continues to require custom software for healthcare.

This is absolutely against the values that the Direct Project included during its development. The reason to choose common e-mail transport was to leverage the large body of infrastructure and software already available. Using custom software increases costs and makes healthcare re-develop tools that have been refined over decades of advancements in e-mail, all at no added value.

Thursday, April 18, 2013

Safety vs Privacy


What do you conclude when looking at this picture?

The solution is:
a) Make the wall shorter
b) Make the wall taller

Those with a strong privacy background recognize this as a Privacy violation. Very clearly the wall is not tall enough. Clearly the female is to be protected against the male actor. Clearly the wall is defective and needs to be taller.

Those with a strong safety background recognize this as a safety concern. Very clearly the wall is not short enough to enable safe conversation between these two. Indeed the safety assessment doesn't apply ethical characteristics to the female or male image.

My viewpoint is to understand the use-case. What are these two trying to do? Is this a case of (a) or (b)? Just because the image is made up of the figures used for bathrooms does not mean that the image is of a bathroom use-case. Knowing the use-case is the only way to understand whether this is a privacy violation or a legitimate discussion over a wall that is too high.

Indeed the solution might be BOTH. The wall is indeed there for privacy purposes, and it is failing. There is also a safety concern, as the wall is not tall enough to prevent someone from putting themselves and others at risk of harm. This shows that privacy and safety risks are not always at odds. Sometimes they can be solved harmoniously.

Thursday, April 11, 2013

Google creepy is not the same as Facebook creepy

Google Now has brought a totally new form of data analytics to my fingertips, and I like it. I am actually handing more information to Google than I would have, just to get this capability. I like that driving routes are suggested, with start times that match my appointments. I like that it knows just the right sports to inform me about, even when that sport is hockey, which the news media seems to know nothing about. This level of leveraging all the information that Google can find on me to bring me value is fantastic. This is what differentiates Google creepy from Facebook creepy.

When Facebook indicates that they are going to start to pull some new form of information, I don’t get the feeling that they are going to do this for my benefit. It is very clear that Facebook is gathering more information for their own benefit. Even their insistent plea to harvest my address-book. I am not going to expose my friends to more advertising through Facebook.

The first evidence that I as a user get from the gathering of data is valuable in the case of Google, and punitive in the case of Facebook. I know that Google is using my data to make money, I am a strong believer that if you are not paying for something, then you are the product and not the customer. The fact that Google makes money with my data is less creepy because Google gives me value. In fact Google gives me so much value that I go out of my way to give it more data. Whereas Facebook creeps me out so much that I avoid telling it many things.

Perception is more powerful than reality. The perception of value, even if it is not truly valuable, is what is important. The fact that Google Now gives me driving directions automatically rather than me doing a Google Search is a small step, so the actual value is small. The actual spend by Google is small. The perception of value is big.

Goldilocks Governance

Healthcare can learn from this. The value of a Health Information Exchange is great (What is the benefit of an HIE); the perception of creepy can also be great. Trust and doing what the consumer 'expects' are the bridging factors. The patient wants their data to be available to those that can provide the patient value. The patient wants their data to be protected against those that provide the patient no value. I coined the term "Goldilocks Governance" for this: not too tight, not too loose, but just right.

This is also a consistent Privacy approach as outlined by the USA White House consumer privacy principles that were published just last year. This privacy philosophy recognizes that the consumer understands the context of their interaction, as defined in "Privacy As Contextual Integrity" by Helen Nissenbaum. It indicates that consumers do understand that their data will be used in specific ways: clearly in healthcare for treatment and billing, but also for public health benefits and other normal operational purposes. This is otherwise described as "the Consumer Should Not Be Surprised", meaning they should not be surprised that their data is used in some specific ways, yet also that it is right for them to be outraged at inappropriate uses of their data.

Monday, April 1, 2013

HIE Patient Identity problem

I will assert, yet again, that the Patient Identity problem that HIEs are having is not technical. The technical standards are clear and used quite a bit (PIX, PDQ, XCPD). The problem is in policy, where there are many different policies in use today. These policies define what a source organization is allowed to divulge, and when they can divulge it. These policies define the parameters of the matching algorithms regarding false-positives, false-negatives, and risk tolerance for keeping readable attributes vs hashes. Oftentimes the policy problem is rooted in misunderstandings of the regulation.

The biggest disconnect I have seen is simply that the organizations involved don't all have the same set of attributes to the same level. For example, one of the big problems seen is that SSN matching is made almost useless because some organizations only know the last 4 digits. Another factor that we will see emerge even more is that organizations never worried about external matching before, so they don't have the ability to now ask for, or record, more information. Some have suggested that the patient could/should provide a voluntary ID such as the patient's 'Direct' address. The existing systems don't know where to save this, and they don't know how to include it in a matching request. The interop standards (PIX, PDQ, XCPD) handle these just fine; they are simply 'other' external identifiers. The sketch below shows why the available attributes matter so much.
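Here is a toy illustration in Python (not the PIX/PDQ/XCPD wire format, and the weights are entirely made up) of the point: each shared attribute contributes only as much matching confidence as it actually carries, so an organization holding only partial attributes can never reach a confident match:

```python
# Made-up weights: how much confidence each shared attribute contributes.
WEIGHTS = {
    "name_and_dob": 0.4,
    "full_ssn": 0.5,
    "last4_ssn": 0.1,        # last-4 alone adds very little assurance
    "direct_address": 0.5,   # a voluntary external identifier, if recorded
}
MATCH_THRESHOLD = 0.8

def match_score(shared_attributes: set) -> float:
    """Sum the confidence of the attributes both organizations hold."""
    return sum(WEIGHTS[attr] for attr in shared_attributes)

# An organization that only stores the last 4 SSN digits:
print(match_score({"name_and_dob", "last4_ssn"}))       # 0.5 -- below threshold
# One that recorded the patient's voluntary 'Direct' address:
print(match_score({"name_and_dob", "direct_address"}))  # 0.9 -- confident match
```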

These items are very much what the security world would view as 'provisioning level of assurance'; that is, how sure is the cross-reference? In this way this overlaps very well with the greater NSTIC effort.

The hardest policy to get agreement on is the extent to which a request for the location of records (just the indication that a location has records, not an indication of what kind of records) is a 'disclosure' that is acceptable or not, especially when the request is made under the workflow of 'treatment'. This is an important policy to allow, as one must first get a positive cross-reference before one can ever know whether the consents have been granted. Surely if there is no positive consent then no match should be made; but if there is any fraction of a positive consent (including emergency override allowed) then a match needs to be allowed. Of course all matches need to be recorded and reported on an accounting of disclosures (something always forgotten). A sketch of this policy follows.
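This Python sketch captures that policy, with illustrative names throughout (ConsentStatus, may_disclose_location): a cross-reference is disclosed only when some fraction of positive consent exists (including emergency override being allowed), only for the Treatment purpose, and every disclosure is recorded for the accounting of disclosures:

```python
from dataclasses import dataclass

@dataclass
class ConsentStatus:
    state: str                               # "YES", "NO", or "MAYBE"
    emergency_override_allowed: bool = False

def may_disclose_location(consent: ConsentStatus, purpose_of_use: str,
                          disclosure_log: list) -> bool:
    """May this community answer 'yes, we hold records for this patient'?"""
    some_positive_consent = (
        consent.state in ("YES", "MAYBE")
        or consent.emergency_override_allowed)
    if not some_positive_consent:
        return False  # opt-out-completely: not even a location disclosure
    if purpose_of_use != "TREATMENT":
        return False  # this negotiated risk is accepted only for treatment
    # The accounting-of-disclosures record that is so often forgotten:
    disclosure_log.append(("cross-reference disclosed", purpose_of_use))
    return True

log: list = []
assert not may_disclose_location(ConsentStatus("NO"), "TREATMENT", log)
assert may_disclose_location(ConsentStatus("NO", True), "TREATMENT", log)
assert len(log) == 1  # the permitted match was recorded
```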

This is one of the main reasons why PIX/PDQ/XCPD are totally different steps than Query for Information (XCA, XDS, MHD). This is not to say that a cross-reference match should not be considered a privacy concern, but rather to recognize that a specific PurposeOfUse (Treatment) might justify some risk; where this risk is managed, not totally unmanaged, and where the impact of this risk is controlled to just the cross-reference. The hard part is that some viewpoints of policy totally forbid even this level of disclosure. Some negotiation seems to be logical.

And yes, I totally agree that patients that want nothing to do with this should not have even this level of exposure happen. As I indicated above, an 'opt-out-completely' is indeed an exclusion; sensitive topics likewise need to be well segmented. I only look to this first-step cross-referencing to be appropriate when there is some form of positive communication possible.

I do think that this is a reasonable thing for NSTIC to look at. However, it is somewhat of a very different problem from the original intention of NSTIC. This is likely why you see organizations solving this problem behind closed doors.

