Saturday, May 21, 2016

A turning point for Privacy in America?

Out this week from Pew Research is an article that I find amazing, given how many instances there are of the public appearing to willfully give away their Privacy. The Pew Research output says enough; not much more I can say, except that I am excited we might be turning the corner. Here is just the first paragraph, where it is clear no turning point has yet happened, but an awareness is emerging! An awareness of many of the Privacy Principles, not just confidentiality.
The cascade of reports following the June 2013 government surveillance revelations by NSA contractor Edward Snowden have brought new attention to debates about how best to preserve Americans’ privacy in the digital age. At the same time, the public has been awash with news stories detailing security breaches at major retailers, health insurance companies and financial institutions. These events – and the doubts they inspired – have contributed to a cloud of personal “data insecurity” that now looms over many Americans’ daily decisions and activities. Some find these developments deeply troubling and want limits put in place, while others do not feel these issues affect them personally. Others believe that widespread monitoring can bring some societal benefits in safety and security or that innocent people should have “nothing to hide.”

Some of my Privacy blog articles

Wednesday, May 18, 2016

Healthcare Blockchain - Big-Data Pseudonyms on FHIR

Grahame challenged us all to think about a realistic use-case for blockchain technology in Healthcare.

Blockchain is a hugely hyped technology because of the excitement around bitcoin. The technology is really not new; it is a special mixture of crypto technologies, not unlike Digital Certificates, except that rather than relying on decoupled proofs, blockchain maintains a public ledger where transactions are recorded along with proof that each transaction happened.
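To make that 'special mixture' concrete, here is a minimal sketch (plain Python, standard library only, all names my own invention) of the core ledger idea: each block commits to the hash of the previous block, so any tampering with recorded history breaks every later proof. Real blockchains add distributed consensus (e.g., proof-of-work) on top of this.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the canonical JSON form of a block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "time": time.time(), "tx": transaction})

def verify(chain: list) -> bool:
    """Any modification to an earlier block invalidates all later links."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, {"from": "A", "to": "B", "amount": 1})
append_block(chain, {"from": "B", "to": "C", "amount": 1})
assert verify(chain)
chain[0]["tx"]["amount"] = 100   # tamper with history...
assert not verify(chain)         # ...and the ledger no longer verifies
```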

The magic of Bitcoin is that it creates value as it is used, and this created value supports the financial burden of the infrastructure/technology. One might even argue that bitcoin is approaching a nexus where the value created is no longer worth the burden, and that this could cause the whole thing to collapse (like a pyramid scheme -- but I didn't say that).

What is very important to point out is that a blockchain is PUBLIC and PERSISTENT. Meaning we can't put sensitive information there, and we can't put data there that might ever need to be corrected. Thus putting healthcare information onto the blockchain is just not going to happen. Sure, we could encrypt it, but that doesn't really use the blockchain. What you put on the blockchain can't be revoked; it is persistently in the public view. So we have to be very careful. Bitcoin isn't worried about this, because these properties are exactly what it needs: it is a public journal of transactions, and those need to exist forever.

So we can figure out ways to use the bitcoin system itself, where the primary focus is on the monetary value; which is useful. Some have proposed ways that insurance, or at least a trust-fund, could be used to pay for medical procedures; including putting executable script into the blockchain that expresses when the money would be released.

I however think that the real challenge Grahame is putting forth is: can we use blockchain technology to build a uniquely Healthcare blockchain? For this we need to solve the fundamental funding problem: how do we financially support this blockchain?

I might suggest that a potential solution is a journal of public pseudonyms, each linked to a data access point (FHIR API) and an authorization server. The chain would assert (by signature) the authenticity and pseudo-provenance of the data, while also enabling accessibility under the data owner's control (UMA/OAuth). The patient would initiate this: get their pseudonym, and scrub their data as much as they want while still adhering to structure (FHIR profile) and integrity (hard to enforce) rules.
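To illustrate, a ledger entry under this proposal might look something like the sketch below. All of the field names and URLs are my invention; the point is that the chain publishes only an opaque pseudonym, a data access point, and an authorization server, never the clinical data itself.

```python
import hashlib
import json

# A hypothetical ledger entry: nothing clinical goes on the chain, only a
# pseudonym, where to ask for data (FHIR API), and who guards access (UMA).
entry = {
    "pseudonym": hashlib.sha256(b"patient-generated secret").hexdigest(),
    "fhir_endpoint": "https://example.org/fhir",        # data access point (assumed URL)
    "uma_authz": "https://example.org/uma/authorize",   # authorization server (assumed URL)
}
# The proof asserts authenticity/pseudo-provenance. A real entry would be
# signed with the private key behind the pseudonym; a bare hash is shown
# here only to keep the sketch dependency-free.
entry["proof"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
print(json.dumps(entry, indent=2))
```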

The important part about this is that it addresses the Identity problem, in that the patient controls the identity. This can't start from the provider (although they can participate upstream). These identities are opaque, verifiable, and permanent -- all attributes that bitcoin leverages. The patient can choose to be known, by linking their blockchain identifier to their Patient resource; or they can choose to publish a pseudo-Patient resource.

This leverages FHIR as the API; and UMA as the decision engine and source of disclosure rules... So everything that we are working toward in the standards is still needed.

Fraud is still a problem. Not in use of the data, as I covered that; but in publishing false data. This system doesn't address a malicious individual who invents healthcare data and publishes it for value. One individual could invent millions of data points and pseudonyms, thus poisoning the actual big-data pool. A solution might be that some set of authorities do strong identity proofing prior to issuing a pseudonym... so someone other than the patient knows the true identity... ugh.

This is inspired by the "New Deal on Data": an effort to build massive big-data while having sufficient rules to guard against abuse.

My articles on De-Identification, Anonymization, Pseudonymization

Monday, May 16, 2016

Start at Consent as a FHIR Resource

Last week I posted about the stalemate on Consent, and Grahame challenged me to complete it by the end of the week. This week I put a proposal forward. I have taken the examples that have been presented to the HL7 CBCC committee and created a Consent resource. I took as much of the Contract resource as was needed by these examples, but customized it specifically for Consent. This also means many elements are not needed.

I also simplified many elements to just those that our examples need. This does not mean that we won't need to bring back these elements, but rather that they are not needed by the examples.

This is the critical 'Agile' method that I was wanting to use, vs the method of building everything that might ever be needed by an infinite set of imagined use-cases. This Agile methodology is a bit more than is required by the FHIR Principles, but it is very much a good methodology to assure the focus on implementations, and that the 80% rule is adhered to.

The important part is that we are hearing from those on the outside (you are all welcome to come inside) that what we have done is too hard to understand, too hard to use, and confusing.

This means that if someone thinks something is missing, they first must describe an example, possibly showing how it can't be encoded today and how they think the model should be improved. This will result in incremental improvement and advancement of the model.

Note I also renamed 'term' to 'except', as the way we are using it in Consent is to list the exceptions to the rule at the base of the consent. Thus it is not all the terms, just the exceptions. This works for both positive and negative consent: Opt-In with exceptions (exceptions are things not allowed), and Opt-Out with exceptions (exceptions are things that are allowed). A sketch follows.
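To illustrate, an Opt-Out consent with an exception permitting disclosure for emergency treatment might be encoded roughly like this (JSON expressed as a Python dict; the element names follow this first draft and are certain to evolve):

```python
# A rough sketch of the draft Consent resource: Opt-Out at the base, with an
# "except" entry that permits disclosure for emergency treatment. Element
# names and the policy URI are illustrative, not settled design.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "patient": {"reference": "Patient/example"},
    "policy": "http://example.org/policy/opt-out",   # base rule: deny (assumed URI)
    "except": [
        {
            "type": "permit",                        # exception to the base rule
            "purpose": [{"system": "http://hl7.org/fhir/v3/ActReason",
                         "code": "ETREAT"}],         # emergency treatment
        }
    ],
}
```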

So, I present the FIRST DRAFT (yes, I expect many improvement opportunities)


This is the Consent Resource:


vs the Contract Resource:

6.7.3 General Model 

The following is the general model of Privacy Consent Directives.
There are context setting parameters:
  1. Who - The patient
  2. What - The topic - all or specific resources are listed
  3. Where - The domain and authority - what is the location boundary and authority boundary of this consent
  4. When - The issued and applies - When was this captured and over what timeframe does it apply
  5. How - The actions and actors - what actions and which actors are covered (such as the purposes of use that are covered)
There are set of patterns.
  1. No consent: All settings need a policy for when no consent has been captured. Often this allows treatment only;
  2. Opt-out: No sharing allowed for the specified domain, location, actions, and purposes;
  3. Opt-out with exceptions: No sharing allowed, with some exceptions where it is allowed. Example: Withhold Authorization for Treatment except for Emergency Treatment;
  4. Opt-in: Sharing for some purposes of use is authorized. Example: Sharing allowed for Treatment, Payment, and normal Operations; and
  5. Opt-in with restrictions: Sharing allowed, but the patient may make exceptions (See the Canadian examples).
For each of these patterns (positive or negative pattern), there can be exceptions. These exceptions are explicitly recorded in the except element.

6.7.4 Realm specifics 

6.7.4.1 US Realm sample Use-Cases 

Five categories of Privacy Consent Directives are described in the Office of the National Coordinator for Health Information Technology (ONC) Consent Directives Document released March 31, 2010, and include the following US-specific “Core consent options” for electronic exchange:
  1. No consent: Health information of patients is automatically included—patients cannot opt out;
  2. Opt-out: Default is for health information of patients to be included automatically, but the patient can opt out completely;
  3. Opt-out with exceptions: Default is for health information of patients to be included, but the patient can opt out completely or allow only select data to be included;
  4. Opt-in: Default is that no patient health information is included; patients must actively express consent to be included, but if they do so then their information must be all in or all out; and
  5. Opt-in with restrictions: Default is that no patient health information is made available, but the patient may allow a subset of select data to be included.

6.7.4.2 Canada Realm sample Use-Cases 

The following scenarios are based on existing jurisdictional policy and are realized in existing systems in Canada. The default policy is one of implied consent for the provision of care, so these scenarios all deal with withdrawing or withholding consent for that purpose. In other jurisdictions, where an express consent model is used (Opt-In), these examples would contain the phrase "consent to" rather than "withhold" or "withdraw" consent for. One of these scenarios is sketched in code after the list.
  1. Withhold or withdraw consent for disclosure of records related to specific domain (e.g. DI, LAB, etc.)
  2. Withhold or withdraw consent for disclosure of a specific record (e.g. Lab Order/Result)
  3. Withhold or withdraw consent for disclosure to a specific provider organization
  4. Withhold or withdraw consent for disclosure to a specific provider agent (an individual within an organization)
  5. Withhold or withdraw consent for disclosure of records that were authored by a specific organization (or service delivery location).
  6. Combinations of the above
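As promised, here is a rough sketch of scenario 3 (withhold consent for disclosure to a specific provider organization) on top of the implied-consent default. Element names again follow the first-draft Consent resource, and the referenced Organization is hypothetical:

```python
# Implied consent (permit) at the base, with an "except" entry that denies
# disclosure to one specific provider organization. Illustrative names only.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "patient": {"reference": "Patient/example"},
    "policy": "http://example.org/policy/implied-consent",  # base rule: permit (assumed URI)
    "except": [
        {
            "type": "deny",                                  # withhold from this recipient
            "actor": [{"reference": "Organization/some-clinic"}],
        }
    ],
}
```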

6.7.4.3 Non Treatment Use-Cases 

Also shown is an example where a Patient has authorized disclosure to a specific individual for purposes directed by the patient (possibly not a treatment case).

Friday, May 13, 2016

End-to-end FHIR testing

There is renewed discussion, much like back in January, around the need to go beyond testing just FHIR Resource 'interoperability'. Testing Interoperability is not easy, and there are struggles with getting this first level of testing done right. But this level of testing is not complete enough to give confidence that an application, server, intermediary, analytics engine, or other component is really ready to be used.

What we need is a higher level specification to focus on. I think the HL7 "Implementation Guide" could be this, but I am thinking of something much higher than is normally documented by HL7. This is because what is needed is not a "Standard" but a "Reference System". A 'system' in the broadest of definitions: a system of systems in a defined environment and policy framework.

A reference system of systems:

I think it is possible to have a 'reference system of systems' as a proof of completeness, one that could be used during connectathons and certification.


This reference system needs to pick a minimum-useful set of FHIR resource centric workflows. The 'minimum' part is so that there is a complete end-to-end workflow, without the workflow itself becoming the center of attention.

This is more than just a selection of FHIR Resources. One can show that any one client can 'communicate' with itself through storing and retrieving a Resource on a FHIR Server. This proves that there is connectivity, but not Interoperability.

Interoperability requires that one can not just communicate but also use the result. Thus it needs an end-to-end workflow where each actor has more than communication as a responsibility. Each actor must do something useful; at least the receivers of data must do something useful. The FHIR Server might just be an intelligent and important carrier of data.
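For illustration, the 'connectivity only' test looks like this minimal sketch (Python with the requests library, against a hypothetical test server). It proves round-trip storage and nothing more:

```python
import requests

BASE = "https://example.org/fhir"   # hypothetical open test server

# "Connectivity": one client talking to itself through a FHIR server.
patient = {"resourceType": "Patient",
           "name": [{"family": "Test", "given": ["Connectathon"]}]}
created = requests.post(f"{BASE}/Patient", json=patient,
                        headers={"Accept": "application/fhir+json"}).json()

# Reading back what we just wrote proves round-trip storage (assuming the
# server echoes the created resource back, id included)...
fetched = requests.get(f"{BASE}/Patient/{created['id']}").json()
assert fetched["name"][0]["family"] == "Test"
# ...but it proves nothing about Interoperability: no *other* actor had to
# understand, act on, or be changed by this data.
```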

This reference system of systems needs to pick a specific 'setting'. The setting is the environment being simulated. Again, it isn't to declare a best-practice, but simply a reasonable reference. There is an infinite set of settings that healthcare ultimately works in. We need to pick one.

What is important in this 'reference' system is to then define a set of reference policies. These policies are not held up as best-practice, but rather as 'one reasonable practice'. They would include Privacy policies, Security policies, authentication policies, audit logging policies, audit reporting policies, data retention policies, and service level agreement policies. Also identity policies: what is a User, their roles, and their relationships; what is a Patient, with what quality of demographics, cross-reference matching criteria, and linking and unlinking responsibilities.

These are not best-practice policies, but simply realistic policies representative of reality. The reality is that there is an infinite set of policies too, even an infinite set of realistic policies.

A more controversial aspect might be User Experience expectations; expectations that might be broad, so as to define just usability goals: how fast can a brand new user understand how to use the system?

All the non-standard stuff

From this combination of a minimal-useful FHIR workflow, an environmental setting, and a set of policies, you can then fill the middle with specific configurations of services. So you define a single Authentication service, choosing one OAuth service with a set of enforcement policies. You choose a consent management system from the many being defined (FHIR Consent, UMA consent, paper consent, etc.). You define an expectation of what security events would be recorded in an AuditEvent, likely even including an actor that does AuditEvent reporting. You define response-time expectations.
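As a sketch, such a 'filling of the middle' might be written down as simply as this; every choice shown is hypothetical and just one of many reasonable configurations:

```python
# One concrete (and entirely hypothetical) set of service choices for the
# reference system; the URLs and limits are placeholders, not recommendations.
reference_system = {
    "authentication": {"protocol": "OAuth2",
                       "endpoint": "https://example.org/oauth/token"},
    "consent": {"manager": "FHIR Consent",
                "alternatives": ["UMA", "paper"]},
    "audit": {"record": "FHIR AuditEvent",
              "reporting_actor": "audit-repository"},
    "service_levels": {"max_response_ms": 500},   # response-time expectation
}
```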

A complete system... as a reference system... not as a 'best-practice'.

Doing the work

Back in January I proposed Document Sharing, a la the MHD profile; which got shot down by HL7 leadership because I proposed it in the HSI workgroup. ... I failed; I should have pursued a solution. HL7 Project Scope Statement 1231

Argonaut could do something similar using their flavor of DAF.

CommonWell could do this. 

Any organization that can take two steps away from the standards definition could do this. It is not unlike what every HIE, Hospital, Clinic, and PHR must do. The difference is that you are explicitly defining it as a reference system, not as the ONLY system.

HEART can be used, but it is, like FHIR, just one part of the system. HEART can't do it alone, as they don't have the end-to-end workflow; they don't have the setting context. The HEART profiles would be something included in the reference system.

Focus on end-to-end system of systems

The hard part is that no one will agree on these Setting and Policy choices. Yet the specifics of the setting and policies are not important. What is important is choosing something reasonable so that the next stage can be set.

I hope I can participate...

Wednesday, May 11, 2016

FHIR Consent as a Resource or Profile

For the past year there has been a stalemate that I have tried to control. I think it is time for this stalemate to come to a conclusion. The topic is Patient Privacy Consent; the discussion is whether this should be modeled as a core FHIR Resource, or as a core FHIR Profile upon the Contract Resource.

The owner of this discussion is the Community Based Collaborative Care (CBCC) workgroup. This workgroup has produced the CDA Consent Directive, and the original Privacy domain model. The Security workgroup is the one that has the infrastructure to decide and enforce Access Control. Thus the two workgroups work together on this topic, with the CBCC workgroup focusing on how to capture a Patient Privacy Consent, and the Security workgroup focusing on how to enforce it. I am co-chair of the Security workgroup and an active member in CBCC. There are other factors that I won't cover.

When we first started to model the Privacy Consent Directive in FHIR, we had just finished (mostly finished) the CDA Privacy Consent Directive, so we had fresh knowledge of what was needed. As we started the initial modeling, aka working with napkins, we came to a general consensus that what we would end up with would be much like a Contract. Some were insistent that Contract was the perfect fit, while others preferred to just do a Consent.

We took the path of those that were actively involved, vs those that were passive. A moment that I really would like to have changed. But this is exactly the "consensus" process... so anyone complaining simply needs to get actively involved; being passive is not helpful. Standards are not built by the passive aggressive.

There was an attempt to address this at the Paris Workgroup Meeting; but CBCC didn't formally meet, and thus all the efforts of the community that went to Paris were ignored.

So now we have a Privacy Consent Directive Implementation Guide, which is a Profile of the Contract Resource. It is working; the examples show that it does respond to the use-cases. It however is not as simple as it could be. This is not a statement that the Profile system doesn't work, but a Profile is a layer of complexity. Further, because of this Profiling layer we end up with concepts in Contract that are not what would be expected by those wanting to do a Consent (like Contract.topic, Contract.action, Contract.subject).
My blog

I vote for Agile:

I would like us to start over. I would like a Privacy Consent Directive "Resource" to be defined; break away from Contract. This does not mean that we lose all the good work we have done, but it does mean we start over.

Use Agile. I want this effort to use Agile, and NOT a top-down approach. That is, focus on real-world use-cases, and build into the Consent Resource only what is necessary. There is no need to build complex layers when simple layers will do.

This Agile approach does not mean we ignore good available standards. ISO/TS 17975 -- "Health informatics - Principles and data requirements for consent in the Collection, Use or Disclosure of personal health information"  is a fine foundation, along with the work HL7 has done on the CDA Consent Directive. I simply want these to be seen as foundational, and not seen as a demand for some preordained structure.

ISO/TS 17975 has a very simple abstract model of what a consent record should include:
— identify the sender, recipient and subject of care,
— include the Purpose of Use or set of purposes which are permitted to be collected and used or disclosed,
— specify the activity permitted: Collection and Use and/or Disclosure,
— include the validity date range,
— be linked directly to the data to which it applies,
— persist with the data to which it applies, and
— be secured in order to preserve confidentiality, integrity, availability in order to provide proof of authenticity of the process and the consent record.
Most of that we get for free with the FHIR RESTful model and simple data elements. A sketch of the mapping follows.
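Here is a rough sketch of that mapping; the element names on the right are illustrative guesses at what a Consent resource would carry, not settled design:

```python
# Mapping the ISO/TS 17975 abstract model onto simple elements of a
# hypothetical Consent resource (right-hand names are illustrative only).
iso_17975_mapping = {
    "sender, recipient, subject of care": ["actor", "organization", "patient"],
    "purpose of use":                     ["purpose"],
    "activity permitted":                 ["action"],   # collect / use / disclose
    "validity date range":                ["period"],
    "linked to the data it applies to":   ["data"],     # FHIR references
    "persists with the data":             "free with REST: the resource persists on the server",
    "secured for proof of authenticity":  "Provenance + Signature (infrastructure)",
}
```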

The real work

What we don't get for free is the part that we haven't even started to model: the way that the RULES would be encoded. The above is just the part that identifies the broad meta: the WHO, WHAT, WHEN, WHERE, WHY. It does not address the HOW rules. How do I describe the kind of data I want protected, in a kind of way, from a kind of people...

More on this topic when I cover what IHE has just finished, out for Public Comment: the Advanced Patient Privacy Consent (APPC), the next generation of the Basic Patient Privacy Consent (BPPC).

Monday, May 9, 2016

Transition

I have been unfortunate enough to have been caught in a broad layoff -- Reduction In Force -- at GE Healthcare. GE Healthcare has been my home for just short of 18 years, and I have loved every minute of it. I have been part of many products, either directly on the team or through consulting with them on the use of Interoperability Standards, Privacy, and Security. These have been fantastically fun exercises in Systems Design.

I have the luxury of taking my time to find a new opportunity. Over the last two months I have spoken to some of you, and your excitement for my list of opportunities has been very gratifying. Over the next few months I will reach out to others, and welcome you reaching out to me.

I describe myself as a System Engineer, Principal Engineering Architect; but I am most excited to help enable Privacy-respecting Information Exchange. This might be between two healthcare practicing organizations, it might be centered on a Patient-managed system, or it might be for the purposes of Research. My passion is to get data moving to where the stakeholders of that data want it to go, and to move it most efficiently, effectively, and accurately. This means using Interoperability Standards, developing new standards, and working with leadership.

I know there are many opportunities for me as a Consultant; however, I would like to find a vendor that has grown up over the past few years and now realizes that it needs to take an active role in standards development, possibly a leadership role. I could build their program and fill many of the roles. I come with two leadership positions within HL7, and less formal leadership in IHE and DICOM. I am thankful to HL7 for having a nice transition program that allows me to maintain my membership, and thus my positions, for a short time.

More details on me and my Resume can be found at my LinkedIn page https://www.linkedin.com/in/johnmoehrke

Wednesday, April 20, 2016

FHIR - Input Validation

Updated: Vadim Peretokin advises on the FHIR chat: You're better off in the world if you know about this stuff. https://www.hacksplaining.com/exercises lists some XML-related vulnerabilities and is pretty easy to learn from.
It has happened again. This time Michael Lawley reported that the HAPI reference implementation was susceptible to an XXE attack -- from Grahame's email to the FHIR list:
Yesterday, Michael Lawley reported that the HAPI reference implementation had a security flaw in that it was susceptible to the XXE attack. Those of you interested in details about XXE can see here: https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Processing
The various XML parsers in the various reference implementations are variably affected by this; we are releasing patches for them now.

Specifically, with regard to the java reference implementation, it has always ignored DTD definitions, so it is immune. Newly released versions will change to stop ignoring DTD definitions, and instead report an error.

The current validator is susceptible to the attack; I am still investigating older versions, and will advise. Once I've done that, I'll check the pascal reference implementation.

Other reference implementers can advise with regard to HAPI, the DotNet reference implementation, and the various other RIs (swift, javascript, python...)
Note that this is an XML issue - your parsers have to be correctly configured. So this is equally likely to be an issue for anyone processing CDA, and even anyone using v2.xml
With regard to the FHIR spec, since the standard recommended mitigation is to turn off DTD processing altogether, I've created a task that proposes making the appearance of DTDs in the instance illegal (#9842)

This issue is not unlike the embedded SQL Injection that Josh found two years ago (almost to the day), for which I decided Josh needed recognition and gave him my Murky Research Award. After that we updated the FHIR specification with a section on being robust to narrative sections. We likely need to update this section to be more broadly about Input Validation, with SQL injection and now XXE as examples.

There has been some 'discussion' following this, where people want to put out that this XXE example is further proof that XML is inferior to JSON. They should note that the embedded SQL injection problem exists for XML, JSON, or any other encoding format. There are sure to be JSON-specific issues.

Input Validation

The solution to both of these is the same mantra from the CyberSecurity community – Input Validation. (Note this is the same answer that the Safety community (e.g. FDA) will give you.) You must inspect any input you receive from elsewhere, no matter how much you trust the sender. This even applies to receiving data from your own system's components (e.g. reading an object from persistent storage, even in the case where you wrote it there). All CyberSecurity frameworks (e.g. NIST, OWASP, ISO 27000, Common Criteria, etc.) have a specific section on Input Validation.

Input Validation is really nothing more than a specific side of Postel's Law – be conservative in what you send, liberal in what you accept. It is the liberal part that is the focus here. In order to be liberal, you should expect wide variation in what the other guy is going to send you, including simple garbage and carefully crafted malicious attacks. Both are possible, and although Hanlon's razor would have you attribute the bad input to stupidity, it still must be defended against.

Input Validation means you need to do some extra homework. Much of it is already done by the FHIR specification, but further 'profiling' is often needed. Where FHIR Profiling is defined, it is just as valuable for Input Validation as it is for use-case clarification. But FHIR based Profiling is not enough. It doesn't cover things like the following (see the sketch after this list):
1. String Length boundaries
2. String character encoding restrictions
3. Permitted characters vs not permitted characters
4. Element range expectations
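Here is the promised sketch: a whitelist-style validator for one hypothetical profile of a name element and a numeric element, covering length, encoding, permitted characters, and range. The limits chosen are arbitrary examples, not a recommendation:

```python
import re

# A minimal sketch of "know what the data SHOULD be": whitelist validation
# for a hypothetical name element and a hypothetical age element.
NAME_MAX_LEN = 64
NAME_ALLOWED = re.compile(r"^[A-Za-z][A-Za-z \-']*$")   # permitted characters only

def validate_name(value: str) -> bool:
    """Length boundary, encoding restriction, and permitted characters."""
    try:
        value.encode("ascii")           # example encoding restriction
    except UnicodeEncodeError:
        return False
    return len(value) <= NAME_MAX_LEN and bool(NAME_ALLOWED.match(value))

def validate_age(value: int) -> bool:
    """Element range expectation."""
    return 0 <= value <= 150

assert validate_name("O'Brien-Smith")
assert not validate_name("Robert'); DROP TABLE patients;--")   # classic injection
assert not validate_age(-1)
```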

What you want is to understand well what the data SHOULD be. An approach that looks only for BAD data will be fragile. There is an infinite set of bad data, so any approach that specifically codes to detect bad data will only be good until tomorrow, when some hacker has identified a new kind of bad data.

The Input Validation sub-system often can't reject a transaction, but it can neutralize data that is not good. It can eliminate that data, translate the characters, encapsulate them, tag the bad data, etc.

The main difference between XML and JSON is that the tooling for XML is likely to be more generous, such as with the DTD problem. The default behavior of most XML tooling is to follow DTDs, as the most likely beginner programming project wants that. However, you must look carefully at your tooling for its Input Validation – Robustness – settings.
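For example, here is how one might reject DTDs outright in Python using the defusedxml package (one well-known mitigation; whether it fits your stack is your call). Rejecting any DOCTYPE is exactly the mitigation the FHIR task (#9842) proposes for instances:

```python
# A sketch of refusing DTDs outright rather than trusting parser defaults.
# Requires: pip install defusedxml
from defusedxml import DefusedXmlException
from defusedxml.ElementTree import fromstring

XXE = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<Patient xmlns="http://hl7.org/fhir"><name>&xxe;</name></Patient>"""

try:
    fromstring(XXE, forbid_dtd=True)   # treat any DOCTYPE as an error
except DefusedXmlException as err:
    print("rejected:", err)            # instance contained a DTD
```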

Performance vs Robustness

Many will balk at the Input Validation need, saying that doing tight input validation – while being liberal – will cause their interface to be too slow. I agree, it is likely to do that. This is where a mature product will be intelligent: it will start out communications with a new sender in a very defensive mode, and as it gains experience it can eliminate some of the Input Validation. Note that this is only possible when you have strong Authentication of the sender, so that you can be sure that it is indeed that sender sending you data and that no other entity can be injecting content. Never would all input validation be eliminated; you must always expect that the sending system could get compromised and thus start sending you garbage that it never sent before. Thus the really mature systems have a sliding scale of robustness, backed by historic patterns from that sender, and tested occasionally. Static rules are no better than never having Input Validation rules.
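A sketch of what that sliding scale might look like in code; the tiers, thresholds, and re-test rate are all invented for illustration:

```python
import random

def checks_for(sender_history: int) -> list:
    """Validation depth backed by the authenticated sender's history:
    more clean history means fewer (but never zero) checks, and the full
    set is re-tested occasionally even for trusted senders."""
    checks = ["schema", "length", "charset", "range", "semantic"]
    if sender_history > 1000 and random.random() > 0.05:   # occasional full re-test
        return checks[:2]    # trusted sender: keep the cheap structural checks
    return checks            # new or flagged sender: full defensive mode
```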

References to various Security Framework guidance – this is not new to the CyberSecurity community

Postscript from Rob Horn

Rob wrote this fine email at the same time I wrote mine. His perspective is very complementary so I asked if I could add it to my article. He agreed.
The problem is not XML per se. The problem is present for any approach that requires a public facing tool. XML is impenetrable without extensive tooling, so it is indirectly responsible. But any and all public facing tools are a risk.

We are not in the golden days of idyllic safety on the Internet.

Healthcare is under direct intelligent attack by malicious actors. All tools are under attack. There is no exception for "it's just educational", or "it's just for standards", or "there's nothing of value to steal". These are not pimply faced dweebs living in their parents basements. These are teams of organized and skilled experts, supported by large bodies of helpers. They include organized crime, hostile nations, etc.

It's good practice to treat all public facing tools with the same care that you give to the tools for patient access, operational use, etc. It's going to become necessary as the attack intensity escalates. We're in the business of providing this kind of product for our customers, so we should all have the skills and ability to maintain this level of protection and quality. If you can't do it, you shouldn't be in this industry. It's more work than we might like. But bad habits spread and the attackers are increasingly working to find twisty trails through secondary and tertiary access points. Penetrating HL7 and HL7 members is a great way to indirectly penetrate the rest of healthcare.

Most of the present active attacks are only described under non-disclosure. But, the publicly disclosed attack by Iran on an obscure little dam in New York state indicates the extent of attacks. This little dam was about as harmless as they get. You could blow it up and the worst that would happen is some wet basements. It didn't generate electricity. All it did was maintain the steady flow of a little river. So why did Iran take over the industrial control system for that dam?

My guess is a combination of practice for operators and intrusion normalization. As a practice target it was great. Nobody would notice the penetration. Nobody would get hurt. This is good for advanced training practice. Normalization is something that I worry about regularly for audit protections. A lot of current audit analysis looks for the abnormal. If penetration indications can be made normal then looking for the abnormal becomes less effective. Intelligent attackers know and understand the defensive methods and do take actions to make them less effective. The kid in a basement might not think this way. The professionals certainly do.

Kind Regards,
Robert Horn | Agfa HealthCare
Interoperability Architect | HE/Technology Office