Tuesday, February 2, 2016

Patient as a User - becoming "known to a practice"

The current practice is ‘in-person proofing’, as the first encounter with the patient is as a patient. Many patients are not at their ‘best’ when they first appear, so the understanding of their identity evolves over the first hours, days, and weeks. Thus in healthcare practice we often know the patient by many identifiers that we have either merged or linked. And there are cases where a merged or linked patient needs to be unmerged or unlinked. Very messy business. Ultimately the patient gets billed for the services they have received, and when they pay, the identity gets confirmation and thus grows stronger. This is just a discussion of the patient id, not the patient as a user. See my topics on Patient Identities.

Patient as User

The patient as a User usually starts with this in-person relationship. Most often the healthcare organization uses the identity they know, and the billing address to send them postal-mail (covered by strong fraud laws). This kickstarts an online confirmation workflow that binds the human patient identity to a user identity. Unfortunately this is often by way of a hospital-managed user account, and not an internet-friendly OAuth identity.

There are increasing cases where an internet-friendly OAuth identity is used. However these (Facebook, Google, etc.) are very low assurance identities, as anyone can claim to be anyone. To use these in healthcare we elevate the LoA using the above described online confirmation workflow, so that the result is an identity that the patient wants to use, elevated to a higher assurance level through the healthcare-driven identity confirmation workflow. See my User Identity topics, specifically on getting to mHealth solutions - real People

User Identity and Authentication 

Patient Identity

Thursday, January 28, 2016

My Testimony to the ONC API Task Force on Privacy and Security

I gave this verbal testimony to the ONC API Task Force on Privacy and Security today... On World Privacy Day.

I want to thank the Task Force for inviting me to speak on behalf of GE Healthcare. I am also a co-chair of the HL7 Security workgroup, a member of the FHIR Management Group (FMG), the lead in IHE Mobile Health Documents (MHD), and an active member and advocate of HEART.

I am pleased at the fantastic testimony this committee is receiving.

GE Healthcare is and has been a strong supporter of standards-based Interoperability, as it enables us to be a global healthcare solution provider. Any customization or specialization for a specific region or provider organization is effort that runs counter to this standards-based approach. I am glad to hear others express this same position for their own various reasons.

GE Healthcare has had APIs as part of our systems for decades. Most of these are the bread-and-butter of any healthcare organization's network backbone, drawing on HL7 and DICOM, using IHE Profiles. Many of our IT products have had web-friendly APIs for some time. We have limited use of FHIR at a few pilot sites, limited more by the developing nature of FHIR right now. All of these APIs, old and new, inform my testimony: RESTful APIs don’t fundamentally change Privacy or Security, but they do elevate the threat.

We place no special qualifications upon our customers to gain access to the API or documentation. Use of the API is restricted using Authentication and Authorization mechanisms.

Privacy concerns mostly involve shared roles and responsibilities between the healthcare provider organization and GE as the vendor providing the system. Given that GE doesn’t have a direct relationship with the patient, while the provider organization does have that relationship and controls the policies and use, the privacy concerns must fall to the provider organization. Given this reality, we have a Privacy-By-Design approach built into our product development processes, so that we build Privacy-enabling technology into our solutions.

The main problem that we see is the very wide variation in security and privacy maturity of healthcare provider organizations. The very large organizations have mature capabilities and have policies, procedures and technology solutions available on which we can build a trust relationship. The vast majority of healthcare provider organizations, however, don’t yet have the full range of these operational aspects. Add to this the variation and layers in legal and regulatory policy requirements.

Technology can be used to resolve Privacy and Security issues, but it must be used within a Policy domain. Policy issues focus on the roles and responsibilities for the various requirements regarding privacy and security operation: Identity Management, Authentication, Consent, Accountability, and Incident Management. These are the types of Policy and Procedures that we often don’t find at healthcare organizations.

  • Patient Identities are an especially problematic area of identity management. With no controlled Patient Identity, privacy cannot be managed or assured. This challenge applies to data management, consent management, and patient access management.

  • User Identities need to be managed by the healthcare provider organization. The healthcare provider organization must take responsibility for the functionality of User Provisioning, user De-Provisioning, Authentication, Account Recovery, Account Suspension, Account Deletion and Account Monitoring. Vendors and applications can leverage the use of standards like OpenID Connect using OAuth.

  • Incident detection and response is often never even discussed. Sharing of responsibility, identifying and assigning specific roles and tasks is critical.

  • APIs imply a more agile ability to hook applications to services. This brings up Policy, Procedure, and Technology questions around how we identify trustable applications, or trustable services; and how those trusts translate into technical trust with maintenance of that trust in a way that supports quickly recognizing broken trust and reacting appropriately.
Overall, the security and privacy challenges associated with APIs are a consequence of the wide variation in maturity of healthcare provider organizations in both policy and technology. The additional variation in regional policies adds to this complexity. Complexity, uncertainty, and variability need to be controlled for Privacy and Security to be successfully achieved. These challenges clearly need effective and timely consideration, given the current policy drivers toward broad and deep API use in healthcare.

Thank you.

My full written testimony is available along with the others at the API Task Force site

One thing that got clarified in the comments is that I think we need to greatly constrain what we are trying to work on, so that we can make progress on something that helps many people. We then work on the next layer of complexity, and then the next layer after that. Josh asked me what I thought this constrained use-case was. My answer is quite simple: access to Documents. A FHIR based API that gives the patient access to all the Documents they have available. This would leverage the IHE profiles (MHD, PIXm, PDQm, IUA, and consent). This is indeed a small step, but it is one that would allow us to focus on User Identity, Patient Identity, Consent Authorization, and API access. It could scope out access by anyone other than the patient, a good second phase. It could scope out the patient directing the data to another location, which brings in trust issues. Focused, so that we can concentrate on the problems to be solved. Yet the focus is useful: it gives us a Security and Privacy framework that can support other access, a simple FHIR API to use, and an API that can also front an XDS or XCA environment. Focused, but a stepping stone.

Also I think I might have been misunderstood in the last question from Leslie Kelly Hall. I think she and the audience might have gotten the impression that I was against giving the patient access, transparency, and control. I was not. I emphasized this quickly at the end, but one never knows how that gets understood. The point I was making is that continued delay to build the perfect solution is delaying access by those that want access now. We should not restrict access for the many because a few (a growing group) want fine-grain controls on all uses of their data. I want to get the data, in full form, to the people. What they do with the data once they have it is up to them. I am very much involved in standards efforts to create a process for fine-grain control, yet I am as frustrated as anyone at how many people can't get a full copy of their data. I want stepping stones.


Monday, January 4, 2016

FHIR Oauth Scope

As FHIR matures, the security topic becomes more and more important. I participate in HEART, an effort hosted by the OpenID community that includes an impressive set of experts from the OpenID, OAuth, and UMA world. They do need more participation from healthcare; it is hard to give every topic that needs attention the full attention it deserves. HEART has some foundational profiles ready to be used:
HEART profiles for review, comment, and approval

So the next thing up for discussion is a set of OAuth 'scope' values. A 'scope' is a way for an App to ask for fewer rights than the user holds, and is a good way to limit the damage that an App can do. So the question really is: in what ways would it be appropriate to cut away rights that a user might hold?
This is something that has not yet been discussed in any useful detail inside of HEART. In fact the specification they have, "FHIR OAuth 2", is not open for review yet. This specification is mostly derived from what SMART supports today. It is made up of a set of strings that represent a few FHIR resources. It is not a complete list of FHIR resource types; this list was simply an initial attempt at coming up with a set of scope values. The list is the logical thing someone would create given simply that FHIR is based on REST. Meaning this is a typical list for any ‘normal’ RESTful API.


This focus on FHIR resource types has the problem that in healthcare it is not the type of resource that differentiates between access allowed vs access denied. There are some FHIR resources that typically just carry data, such as Organization, HealthcareService, Location, Device, ValueSet, Conformance, etc. These resources don't carry sensitive information that varies by instance.
However Resources like CarePlan, Medication, Observation, DiagnosticReport, and others carry data that can vary widely on how sensitive it is. 

Normal is normal for Healthcare data

These Resources might be carrying what most people consider "Normal" healthcare information. Note that the word "Normal" is relative to all healthcare information, not a label relative to all information. Healthcare information, even Normal, is considered "High Risk" overall.

Beyond Normal

There are sensitive health topics: HIV status, drug abuse, mental health, and the like.

Finding Normal

This is not an easy tag to set on data, so unfortunately most data is marked "Normal". Which is potentially not wrong, just not very helpful. I give advice in How to set the ConfidentialityCode

sensitivity evaluation along a vector

What I am looking for is a way of saying "I want to be using data that is Normal or less". The _confidentiality codes are defined specifically to be a scale from not sensitive, to less sensitive, to normal, to highly sensitive, to too-hot-to-handle. This was an explicit exercise, done in concert with ISO 13606, so that we would have a linear assessment of risk.
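The linear scale makes the "Normal or less" check a simple ordering comparison. A minimal sketch, using the HL7 v3 Confidentiality codes (U, L, M, N, R, V) in their defined order of increasing sensitivity:

```python
# The linear confidentiality scale described above, least to most sensitive:
# U (unrestricted), L (low), M (moderate), N (normal), R (restricted), V (very restricted)
CONFIDENTIALITY_ORDER = ["U", "L", "M", "N", "R", "V"]

def within(code: str, ceiling: str) -> bool:
    """True if `code` is at or below the requested `ceiling` on the scale."""
    return CONFIDENTIALITY_ORDER.index(code) <= CONFIDENTIALITY_ORDER.index(ceiling)

# "I want to be using data that is Normal or less"
assert within("M", "N")      # moderate is within a Normal ceiling
assert not within("R", "N")  # restricted is beyond a Normal ceiling
```

Because the scale is linear, an access-control decision never needs to enumerate individual sensitivity topics; it only compares positions on the scale.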

Purpose Of Use

We often focus too much on treatment workflows, yet there are many other reasons why people, applications, or services might want access to healthcare data. This is represented by the Purpose Of Use, using the PurposeOfUse vocabulary as a starting point. This allows for the normal "Treatment" or "Billing", but also includes marketing, legal, public health reporting, eligibility, etc.

the REST

I don't mind including the classic REST viewpoint. I just don't think it is sufficient. So I would include the ability to limit the scope based on the REST operators and the FHIR Resource types.

Proposal (for discussion)

purposeOfUse “:” _confidentiality “:” resource “:” action “:” Patient

  • purposeOfUse -- value from the PurposeOfUse vocabulary
  • _confidentiality -- highest value from the _confidentiality vocabulary
  • resource -- FHIR resource type from the resource-types value-set
  • action -- RESTful verb (CRUDE) from the restful-interactions value-set
  • Patient -- URI to the Patient resource identifying a specific patient
  • where any can be “*” to indicate not requesting a constraint.
Further note that multiple scopes can be indicated with a "," separator. 
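To make the proposal concrete, here is a minimal sketch of parsing such scope strings. The structure and field names follow the hypothetical proposal above; the example values (TREAT, N, Observation, the Patient URI) are illustrative. Note one design detail: because the Patient field is itself a URI containing ":", a parser must split on only the first four separators.

```python
from typing import NamedTuple, List

class Scope(NamedTuple):
    purpose_of_use: str   # e.g. TREAT, from the PurposeOfUse vocabulary
    confidentiality: str  # e.g. N, highest value from the _confidentiality codes
    resource: str         # FHIR resource type, or "*"
    action: str           # RESTful verb, or "*"
    patient: str          # URI to the Patient resource, or "*"

def parse_scope(text: str) -> Scope:
    # maxsplit=4 keeps the trailing Patient URI intact even though it contains ":"
    parts = text.split(":", 4)
    if len(parts) != 5:
        raise ValueError("scope must have 5 ':'-separated fields")
    return Scope(*parts)

def parse_scopes(text: str) -> List[Scope]:
    # multiple scopes are separated by ","
    return [parse_scope(s) for s in text.split(",")]

scopes = parse_scopes(
    "TREAT:N:Observation:read:http://example.org/fhir/Patient/123,TREAT:N:*:read:*")
assert scopes[0].patient == "http://example.org/fhir/Patient/123"
assert scopes[1].resource == "*"
```

A "*" in any field simply means that field is not being constrained, so the broadest request a user could make is `*:*:*:*:*`.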

Further note that the authorization server can downgrade even further the scopes that were requested to the scopes that are granted. The OAuth specification doesn't explain how this is done, just that it is allowed. One example would be where the authorization server knows the app by identity, and thus restricts its scopes.

Break-Glass use-case

This proposal sets up one model to support ‘break-glass’ by first asking for "Normal" data, but when a break-glass justification exists then asking for “Restricted”.  I know I need to explain this more, but this is not the topic of this blog post.

Privacy Consent Directive

I expect that a Privacy Consent Directive might also be a useful vector through the Scope. An app could say it only wants the access rights granted to the user through a specific Privacy Consent Directive. This might be especially useful when the patient can actively grant one-by-one authorizations.

I didn't include this in the proposal because there is active work on FHIR Privacy Consent Directives, and equally interesting HEART efforts to leverage UMA.


This is in no-way a conclusion, but a proposal for discussion.

Historic Blog Topics

Thursday, December 31, 2015

Blog review of 2015

I blogged half as many articles as last year, and yet my readership only dropped by 16%. I am amazed at all you loyal readers. I wish I had more than 28 blog articles this year to give you. They fall mostly into the categories of FHIR, Consent/Access Control, and De-Identification. I hope that they were useful. Thanks.

FHIR
  1. Break-Glass on FHIR solution
  2. Break-Glass on FHIR
  3. HEART profiles for review, comment, and approval
  4. Building a MHD Client before MHD is DSTU2 aligned
  5. IHE updating FHIR Profiles to align with DSTU2
  6. FHIR Security initiatives
  7. FHIR does not need a deidentify=true parameter
  8. What is MHD beyond XDS-on-FHIR?
  9. Searching for an ATNA Audit Record Repository
  10. MHD Connectathon Results
  11. FHIR Security: Do (Not) Worry
  12. FYI: Update on the creation of Joint workgroup between #IHE and #HL7 including #FHIR topics
  13. IHE MHD and DSG now open for Public Comment
Consent & Access Control
  1. Guest Post: Use-Case - Security Audit Prompts Investigation
  2. Don't disassemble ATNA, what you are looking for is there.
  3. Where do I record the Reason that an auditable event happened?
  4. How to set the ConfidentialityCode
  5. Strawman on Consent Directive
  6. Privacy Principles
  7. Why Mutual-Authorized-TLS?
  8. TLS (not SSL) Connectathon trials and tribulations
  9. Applying CyberSecurity Standards to Medical Device Design
  10. FHIR Security initiatives
  11. FHIR does not need a deidentify=true parameter
  12. FHIR Security: Do (Not) Worry
De-Identification
  1. De-Identification for Family Planning
  2. NIST seeks comments on De-Identification
  3. Is it really possible to anonymize data?
  4. FHIR does not need a deidentify=true parameter
Interoperability
  1. Response to Keith's ask on my theory of Interoperability
  2. IHE FormatCodes are mandatory
  3. In Wisconsin we have Interoperability
  4. IHE MHD and DSG now open for Public Comment

Sunday, December 27, 2015

Break-Glass on FHIR solution

I explain the use-case and environment behind Break-Glass on FHIR. In there I explain that it is unusual to need Break-Glass, but that it is still an important capability in healthcare.  In this article I will outline a few solutions that exist, and hint at some other solutions.

This solution is based on a Client/Server relationship where the security subsystem is managing Access Control between the Client and the Server. This diagram and these definitions come from the FHIR specification:

  • User -- the consumer that is using a healthcare related system
  • Client -- the client application the user is using (application, mobile app, website, etc.)
  • Security -- the security system (authentication and access control)
  • Repository -- the clinical/healthcare repository

Notify that Break-Glass 'could' be used.

This is not strictly necessary, as a user/system could always indicate that Break-Glass is being invoked. If it is not authorized, then this request would be rejected. If it is authorized but no additional information is available, then nothing more is returned. The problem is that without a way to signal that break-glass could be used, the system degenerates: either it is normal RBAC, or it is always an emergency. Thus for true Break-Glass, one really needs a way to indicate that information is being withheld that could be accessed if Break-Glass were declared. Note that this notification should not be used when the data being withheld is not accessible even under Break-Glass.

The way that is in the FHIR Specification today is to include in the OperationOutcome.issue.code the value "suppressed", with severity "information". This would indicate, for a normal request, that normal results were returned but some results were suppressed. This does require that suppressed is not used for any other purpose. That is not obvious today, but could be an operational requirement in a specific environment, likely under some Implementation Guide.

As discussions on the FHIR mailing list have shown, not all operations can easily return both a success and an OperationOutcome resource, so this model only works where OperationOutcome can be carried on the Response.
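The signaling described above can be sketched as the JSON an OperationOutcome would carry alongside the normal results. The helper function name and the diagnostics text are illustrative, not from any specification; the "suppressed" issue code and "information" severity are the FHIR-defined values discussed above.

```python
import json

# Hypothetical server-side helper: build the OperationOutcome that signals
# "normal results returned, but some information was withheld by policy"
def suppressed_outcome(diagnostics="Additional results were withheld by policy"):
    return {
        "resourceType": "OperationOutcome",
        "issue": [{
            "severity": "information",  # informational, not an error
            "code": "suppressed",       # some results were filtered/withheld
            "diagnostics": diagnostics,
        }],
    }

print(json.dumps(suppressed_outcome(), indent=2))
```

A client seeing this issue in an otherwise successful response knows that declaring Break-Glass (if authorized) might return more.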

Indicate that Break-Glass is being used.

On the Security-Labels page is a proposal for how to indicate that Break-Glass is requested. I don't recall reviewing this text, so it was a surprise when Grahame pointed it out to me. It seems odd to be on the Security-Labels page rather than the Security page. It doesn't even use a security label.

This solution proposes that a URI could be defined to indicate that "Break-Glass" is being requested. This URI is then represented in the HTTP Request as a web category.

I would see this as more experimental; but given that it is in the specification today, I must at least acknowledge that it is more than just something for people to experiment with and comment on. That said, if you have comments, I would be very happy to receive them.
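A minimal sketch of how a client might represent such a category in the HTTP request. Both the break-glass URI and the scheme URI here are placeholders of my own; the actual values would come from whatever the specification ultimately defines.

```python
# Hypothetical break-glass category URI -- the specification would define the real one
BREAK_GLASS_URI = "http://example.org/fhir/security#break-the-glass"
SCHEME_URI = "http://example.org/fhir/tag/security"  # also hypothetical

def break_glass_headers() -> dict:
    """Build the HTTP headers that declare break-glass as a web category."""
    return {"Category": '%s; scheme="%s"' % (BREAK_GLASS_URI, SCHEME_URI)}

headers = break_glass_headers()
assert "break-the-glass" in headers["Category"]
```

The attraction of this approach is that the declaration rides on the request itself, needing no change to the FHIR resources being queried.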

Audit Log that Break-Glass has been used.

When Break-Glass is used it is important to record in the audit log that Break-Glass was used. This triggers the Privacy and Security office, working with the Clinical Safety office, to investigate whether it was an appropriate use of Break-Glass. In FHIR AuditEvent, there is a defined way to indicate that Break-Glass has been used, a benefit of basing AuditEvent on ATNA. Here are the critical aspects:
AuditEvent.type --> 110113, Security Alert
AuditEvent.subtype --> 110127, Emergency Override Started
AuditEvent.recorded --> When it happened
AuditEvent.agent --> Who declared break-glass
AuditEvent.agent.location --> Where is this agent
AuditEvent.agent.policy --> Policy enabling break-glass
AuditEvent.outcomeDesc --> Free-text explanation of why
AuditEvent.purposeOfEvent --> Why break-glass (ETREAT)

Where an entry in AuditEvent.outcomeDesc could carry the 'text' description that the user is prompted to enter. This is a common UX for Break-Glass, where the user must type in a free-text explanation of why they feel it is warranted to 'break-glass'.
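The element mappings above can be sketched as the JSON an audit recorder might emit. This follows the flat element paths listed above; the actual nesting of AuditEvent elements varies by FHIR version (DSTU2 wraps several of these inside an `event` element), and the function name and example arguments are illustrative.

```python
from datetime import datetime, timezone

def break_glass_audit_event(user_id, location, policy_uri, reason_text):
    """Sketch of the break-glass AuditEvent, per the element list above."""
    return {
        "resourceType": "AuditEvent",
        "type": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                 "code": "110113", "display": "Security Alert"},
        "subtype": [{"system": "http://dicom.nema.org/resources/ontology/DCM",
                     "code": "110127", "display": "Emergency Override Started"}],
        "recorded": datetime.now(timezone.utc).isoformat(),  # when it happened
        "agent": [{"userId": user_id,          # who declared break-glass
                   "location": location,        # where the agent is
                   "policy": [policy_uri]}],    # policy enabling break-glass
        "outcomeDesc": reason_text,             # user's typed justification
        "purposeOfEvent": [{"code": "ETREAT"}], # why: emergency treatment
    }

ae = break_glass_audit_event("drbob", "ER-3",
                             "http://example.org/policy/break-glass",
                             "Unconscious patient, suspected overdose")
assert ae["subtype"][0]["code"] == "110127"
```

The `reason_text` argument is where the prompted free-text explanation lands, matching the UX described above.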

Followed later, hopefully, by 110138, Emergency Override Stopped. This is not always possible to know, as not all user experiences are specifically session oriented. But oftentimes the user experience is clear about when the event starts and when it stops.

Future experimentation on Break-Glass

This is a good experimentation topic. I don't think the best solution has yet been found. So here are a few alternatives to play with. 

Using Security-Labels

The security-labels include the full vocabulary from the Healthcare Privacy and Security Classification System (HCS), so there are security tags that can be used to indicate for each Resource instance if it is "Normal" or "Restricted". Thus the data that falls into the "Break-Glass" use, would be marked "Restricted", while "Normal" (or less) would be available for "Treatment". 

This is the most likely way to identify information that should be blocked except for "Break-Glass", but is not specifically necessary. This solution does require well managed tags on all data.

Using OAuth for Break-Glass

One thought I have is to leverage the OAuth 'scope'. In normal operation one would always ask for an OAuth token using a scope value that limits access to "Treatment", which would be 'normal treatment'. When needing to declare Break-Glass, one would ask for an OAuth 'scope' with "Emergency Treatment". In this way the OAuth authority can reject the request, because the user doesn't hold the rights to Break-Glass; or, if it does return the security token, that is an indication to the Server that Break-Glass has been declared and granted to the user.
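Tying this back to the scope proposal from the earlier post: normal operation would request a Treatment purpose with Normal confidentiality, while declaring Break-Glass would request Emergency Treatment (ETREAT, from the PurposeOfUse vocabulary) with Restricted data. A sketch, with the scope string format taken from that earlier (hypothetical) proposal:

```python
# Scope strings per the hypothetical purposeOfUse:_confidentiality:resource:action:Patient
# proposal: normal treatment of Normal data vs. emergency treatment of Restricted data.
NORMAL_SCOPE = "TREAT:N:*:*:*"
BREAK_GLASS_SCOPE = "ETREAT:R:*:*:*"

def requested_scope(break_glass_declared: bool) -> str:
    """Pick the OAuth scope to request from the authorization server."""
    return BREAK_GLASS_SCOPE if break_glass_declared else NORMAL_SCOPE

assert requested_scope(False) == "TREAT:N:*:*:*"
assert requested_scope(True) == "ETREAT:R:*:*:*"
```

The authorization server then makes the Break-Glass decision at token-issuance time, so the resource Server only ever sees tokens whose scope already reflects whether Break-Glass was granted.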

Using side-channel requests

A classic solution is to provide an alternative service that one could query to see if information would be suppressed. You would send the request you would like to ask a Server, and you get back an indication of whether information would be suppressed without Break-Glass. The problem is that this requires another round-trip.

Conclusion

Not done yet... need experimentation and lessons-learned sharing... so please share.