Wednesday, December 28, 2016

Building more Software Architects

In close to 30 years as a professional engineer, I have found that some people are natural software architects, while other expert software engineers struggle with architecture. There seems to be a shared characteristic among those people who can take a step back and 'architect'.  Is this learned? I suspect anything can be learned. If so, what is the critical catalyst that triggers and feeds that learning?

I often run into people given the role of software architect when they are really a subject-matter-expert on a specific project.  On that project they are superior to everyone else, but they are not an architect. I also run into people who really want to become a software architect, but can't seem to hack it.

I also see some excellent Architects get pulled into Management, where they waste away. Or worse, they end up as Program Managers, simply because they are the only ones who know all the moving parts. I am not saying everyone should strive to be a software architect, or that it is the pinnacle.

I ask because I think there are far too few true architects today.  This is even more true as we enter the system-of-system-of-systems world of the Internet-of-Things (IoT). An architect must be able to think short-term, long-term, horizontal-scale, depth-scale, modularity, privacy, security, safety, reliability, continuous deployment, etc. All while being able to pivot when new information appears...

What are your top characteristics of a real software architect?
How did they get that way?

Friday, December 23, 2016

New Administration --> Fix Healthcare Problems

I am comforted to hear from many Healthcare leaders that their advice to the new USA administration is to continue the progress we have made, including continued support for Exchange, Direct, and CDA, while encouraging FHIR. I very much agree with these. Changing from these directions would kill much momentum and disrupt healthcare in a bad way.  There are others encouraging him not to kill Obama Care. I suspect he won't kill it but will re-brand it, or create a single-payer system.

There are some things in Healthcare that are broken in ways that are just nuts. Given that the new Trump administration is likely to be willing to do things that are against the norm for politics, I think we should recommend that these broken things be fixed. Because fixing them means radical change, and it appears that radical change is what we are in for over the next four years.

I will note that this was not my vote, and I am scared as hell. But it is a foregone conclusion, so we either stick our heads in the sand and hope our ass survives, or we do what we can to make the best of the situation.

My three things that are broken and need radical fix:

  1. Patient Identifier -- We need a national patient identifier. It won't be perfect, but it is badly needed. I have tried to make the point that this patient identifier can be opaque, and thus it can enhance Privacy. Today we share highly valuable demographics, as that is the only way we can make a cross-reference. This is NUTS. Let's fix it. There are technologies today that allow us to have opaque identifiers while also assuring that the identifier can be validated. There are technologies today that would allow purpose-specific queries for cases where the patient didn't bring in their identifier but there is a health-critical reason we need to look it up by demographics. There are technologies that can keep private the use of that identifier. Technology can scale today. This technology might be Block-Chain, but I don't think so due to the second need.
  2. Universal Privacy -- The patchwork of privacy regulations is getting in the way of progress. Declare that all humans have a right of Privacy. Define what that Right means. Be reasonable (the right to be forgotten is useful, but not reasonable). Override the patchwork of federal privacy, healthcare privacy, state privacy, etc. Privacy is not an option, or something someone can sell. Violations of these Privacy principles must result in punishment regardless of who or how the violation happened. ONE set of rules, even hard rules, will be easier to deal with than the patchwork. This will result in less privacy failure, and less privacy denial. THIS should not be specific to healthcare. ONE right of Privacy. Note that the regulation should not include any technology-specific requirements, as technology changes and the regulation would then break.
      
  3. Incident Response Community -- Far too often something bad happens and knowledge of it is suppressed. I am not asking for public disclosure of everything. BUT the community should be enabled to learn lessons from others' failures. This is true of at least Safety, Privacy, and Security.  There needs to be a way that authorized individuals representing every organization in healthcare can participate confidentially. That is, they can expose a failure within their organization without adverse reaction (they must still meet regulated requirements). What I mean is that this is a peer group that will not use the information against their peers. What should happen is that their peers help diagnose what happened, come up with an action plan, and update the lessons-learned so that all the peers can implement that lesson. The result is a community that only gets stronger. This does NOT inhibit competition, as competition should be on health and experience outcomes.  This does happen in some circles, but needs government endorsement and encouragement.
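To illustrate the first item, an opaque identifier that can still be validated might look like the following sketch. The keyed-hash scheme, key handling, and serial-number format are my inventions for illustration, not a proposal for the actual national system: the point is only that an identifier can carry no demographics yet still be checkable for typos and forgeries.

```python
# Illustrative opaque-but-verifiable patient identifier: a serial number
# plus an HMAC tag, so a relying party holding the key can validate the
# identifier without ever seeing demographics. All names here are
# hypothetical; a real national system would need far more design.
import hashlib
import hmac

KEY = b"authority-signing-key"  # placeholder secret held by the issuing authority

def issue_id(serial: int) -> str:
    """Issue an identifier: a 10-digit serial with an 8-hex-char HMAC tag."""
    body = f"{serial:010d}"
    tag = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{body}-{tag}"

def is_valid(patient_id: str) -> bool:
    """Check the tag; a mistyped or forged identifier fails validation."""
    body, _, tag = patient_id.rpartition("-")
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(tag, expected)
```

Note the identifier itself reveals nothing about the patient; the cross-reference from identifier to person stays inside the authority.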
I am sure there are others. I just don't have knowledge of them. I suspect there are HUGE gains to be made in supply-chain, payment-chain, and malpractice. These are broad areas that seem to me to be sucking far more money out of the system than they are providing value to the system. 

FHIR is not the solution to any of these broken things... but FHIR will be part of the healthcare solution.

Monday, December 12, 2016

IHE IT Infrastructure - 2017 work items

The IT Infrastructure workgroup has selected their work items for next year. It consists of four new work items, only one of which is a brand-new concept. That is, the other three re-cast old use-case needs into an http RESTful world. Only one of these new work items is not FHIR based.

  1. Healthcare Provider Directory -- IHE has two standards: Care Services Discovery (CSD), which has been adopted in several countries as a way to manage health worker and health facility data, and Healthcare Provider Directory (HPD), which has limited adoption. CSD and HPD are SOAP-based web services and are not compatible with systems deploying RESTful clients and servers.
  2. Patient-Centric Data-Element Location Services -- This is a profile of profiles, addressing the use-case need for an element-level perspective (i.e. FHIR) of the data held within Documents in a Document Sharing infrastructure (i.e. XDS). This profile of profiles will show how to bring various profiles together to add an additional layer of Provenance. Orchestrating: XDS, MHD, PDQm, QEDm, and various Document Content Profiles.
  3. Sharing platform for non-patient documents -- Support for documents like configuration-files, style-sheets, templates, instructions, etc. These have some metadata needs, driven by search use-cases, but will not contain patient specific information. 
  4. Remove Documents from XDS Repository -- Today the Metadata Update supplement has a method for removing a DocumentEntry, but that leaves disconnected the Document in the Repository. This work item will address all Remove use-cases, including the metadata and the document. 
In addition to these the committee also recognizes significant work needs to be done to 
  • Upgrade existing FHIR profiles to STU3. This work likely won't happen until late in the cycle, as STU3 seems delayed. Most of these changes (MHD, PDQm, ATNA) will be mostly administrative. The changes to mACM and PIXm might be simple updates too, or might require significant consideration of the best way to solve them given STU3 content. 
  • Maintenance tasks. The CP backlog is better than last year, but not much better. Therefore ITI will continue to focus on resolving this backlog: meeting more often (weekly), with targeted meetings so as to draw in the appropriate specialists.
I think ITI is maturing, with few net-new big items. This could be because it is not being approached with new work, but I suspect it is more a recognition that the existing infrastructure is supporting significant domain-specific work.

Friday, December 9, 2016

War against TLS 1.0

I have gotten into multiple discussions on the topic of TLS 1.0. The result always seems to be no change in anyone's position.

There are a few agreed to points:

  1. SSL is fully forbidden. 
  2. TLS 1.2 is best
  3. TLS 1.0 and 1.1 are not as good as 1.2
  4. Bad crypto algorithms must not be used (e.g. NULL, DES, MD5, etc)

However, some people are taking a policy decision that TLS 1.2 is the ONLY protocol. They are allowed to make this policy change, as long as it doesn't impact others that can't support that policy.

I have no problem with a war on SSL. I simply have a realist view on available implementations of TLS 1.2 on platforms where software is available to run. I would love for everyone to have the latest protocols, and for those protocols to be perfectly implemented. Reality sucks!

Standards Recommendation on TLS

What is especially frustrating is that they point at standards as their justification. YET those standards explicitly allow use of TLS 1.1 and TLS 1.0 in a very specific and important practical case... that is, when the higher protocol is not available.

It is this last clause that seems to be escaping recognition.

The 'standard' being pointed at is IETF (the writers of the TLS protocol) RFC7525.  This isn't just an IETF specification, it is a "Best Current Practice" -- aka BCP195 -- May, 2015



Recommendations for Secure Use of Transport Layer Security (TLS) 
and Datagram Transport Layer Security (DTLS) 


Let me excerpt the important part of that standard from section 3.1.1: Bold and highlight added for emphasis. 


3.1.1 SSL/TLS Protocol Versions

   It is important both to stop using old, less secure versions of
   SSL/TLS and to start using modern, more secure versions; therefore,
   the following are the recommendations concerning TLS/SSL protocol
   versions:

   o  Implementations MUST NOT negotiate SSL version 2.

      Rationale: Today, SSLv2 is considered insecure [RFC6176].

   o  Implementations MUST NOT negotiate SSL version 3.

      Rationale: SSLv3 [RFC6101] was an improvement over SSLv2 and
      plugged some significant security holes but did not support strong
      cipher suites.  SSLv3 does not support TLS extensions, some of
      which (e.g., renegotiation_info [RFC5746]) are security-critical.
      In addition, with the emergence of the POODLE attack [POODLE],
      SSLv3 is now widely recognized as fundamentally insecure.  See
      [DEP-SSLv3] for further details.

   o  Implementations SHOULD NOT negotiate TLS version 1.0 [RFC2246];
      the only exception is when no higher version is available in the
      negotiation.

      Rationale: TLS 1.0 (published in 1999) does not support many
      modern, strong cipher suites.  In addition, TLS 1.0 lacks a per-
      record Initialization Vector (IV) for CBC-based cipher suites and
      does not warn against common padding errors.

   o  Implementations SHOULD NOT negotiate TLS version 1.1 [RFC4346];
      the only exception is when no higher version is available in the
      negotiation.

      Rationale: TLS 1.1 (published in 2006) is a security improvement
      over TLS 1.0 but still does not support certain stronger cipher
      suites.

   o  Implementations MUST support TLS 1.2 [RFC5246] and MUST prefer to
      negotiate TLS version 1.2 over earlier versions of TLS.

      Rationale: Several stronger cipher suites are available only with
      TLS 1.2 (published in 2008).  In fact, the cipher suites
      recommended by this document (Section 4.2 below) are only
      available in TLS 1.2.

   This BCP applies to TLS 1.2 and also to earlier versions.  It is not
   safe for readers to assume that the recommendations in this BCP apply
   to any future version of TLS.

Note the last bullet tells you that you yourself must support TLS 1.2. A good thing if your platform allows it.
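For implementers on platforms that expose the choice, this BCP 195 posture (require 1.2 support, prefer it, fall back only when nothing higher is offered) can be expressed directly. A minimal Python sketch, assuming Python 3.7+ where `ssl.TLSVersion` is available and an OpenSSL build that still permits the older versions:

```python
import ssl

# Client context: negotiate the highest version both sides support,
# preferring TLS 1.2 but permitting fallback to 1.0/1.1 per the
# "no higher version is available" exception in BCP 195 / RFC 7525.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
```

A policy that forbids fallback entirely would simply set `minimum_version` to `TLSv1_2`; the point of the excerpt above is that the standard does not require that of everyone.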

Financial industry PCI standard

Doesn't PCI require that organizations stop using TLS 1.0?
(Taken from a Sequoia recommendation on TLS, as I am not a PCI expert.)  As of 2016-11-23, the PCI issued the following text on their public web site at: https://blog.pcisecuritystandards.org/migrating-from-ssl-and-early-tls which states "The Payment Card Industry Security Standards Council (PCI SSC) is extending the migration completion date to 30 June 2018 for transitioning from SSL and TLS 1.0 to a secure version of TLS (currently v1.1 or higher). These dates provided by PCI SSC as of December 2015 supersede the original dates issued in both PCI Data Security Standard v3.1 (DSS 3.1) and in the 'Migrating from SSL and early TLS' Information Supplement in April 2015."

Conclusion

Yes, it would be great if everyone had all the latest protocols, and if all those protocols were implemented without errors... BUT reality gets in our way. Especially so with Interoperability, where the whole point is to interoperate with what is actually deployed. 

UPDATE: Readers should note that RFC7525 is very readable and full of far more recommendations than just TLS version, including detailed discussion of cipher suites, authentication types, etc. There is no perfect solution or configuration. Security is RISK MANAGEMENT, and needs continuous care and monitoring by an expert.

Tuesday, December 6, 2016

User Account abandonment policy

I have changed employer, from GE Healthcare to By Light. Both companies offer their employees Health Insurance through United Healthcare.

I 'figured' that I could continue to use my existing United Healthcare web login account to access both the old and new insurance account. Turns out this is not the way they do it. They want me to create a new web login for the new insurance account. I guess this is logical, and clean for them. It is inconvenient for me to have two accounts at the same web site, but it is also possible.

Chat session with UnitedHealthcare
Paula B. has entered the session.
JOHN MOEHRKE: Hi Paula.
Paula B.: Thank you for being a loyal member with UnitedHealthcare. How can I help you today?
Paula B.: How are you?
JOHN MOEHRKE: I have changed employer
JOHN MOEHRKE: My new Employer also uses UHC
JOHN MOEHRKE: so, how do I get my web login to recognize this new account?
Paula B.: I understand. For the website to recognize your new account through your new employer, you will need to re-register with your new account information.
JOHN MOEHRKE: so... I need to create a new login user? Or is there some process I can use to use the current login?
Paula B.: No, I'm sorry, you cannot use the old information. If I am not mistaken, it will continue to be associated with the old account.
JOHN MOEHRKE: okay. so how do I close the old login account? Meaning, how do I prevent it from ever being used again?
Paula B.: Once you create your new and everything has been update throughout all the databases that old account will no longer be active.

What I was worried about is that after I stop using my old login, there is a risk that the account is not monitored and thus open to attack. The attack would need to avoid the normal detection on accounts. But as we have seen this week with Credit-Cards, a smart attacker figures out ways to avoid detection. In the case of Credit-Cards, they used many storefronts to try various codes. In the case of a user login, they might simply try a small number (1-3) of attempts each day, presuming the detection resets each day. Given that I would no longer be logging in, having abandoned the account, the attacker has years and years to try.
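The arithmetic of this "low and slow" approach is sobering. A quick back-of-envelope, assuming (hypothetically) that the lockout counter resets daily and the account sits abandoned for five years:

```python
# Back-of-envelope: a patient attacker making a few guesses per day
# against an abandoned account, staying under a daily lockout threshold.
guesses_per_day = 3        # assumed to stay below detection
years_abandoned = 5        # nobody is watching the account
total_guesses = guesses_per_day * 365 * years_abandoned
print(total_guesses)  # 5475
```

Thousands of undetected guesses is more than enough to work through a list of common passwords, which is why a hard deadline for disabling abandoned accounts matters.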

The good news is that United Healthcare has a policy that covers this. They know when an account has been abandoned, and their login shows me this. They allow me to log in for 18 months, so that I can get to old information. Often this old information might be needed for TAX purposes, so 18 months is reasonable. After 18 months they totally disable the account. I tried to get details on just what this means, but the responses I did get give me some comfort that they did this right.

Paula B.: Once you create your new and everything has been update throughout all the databases that old account will no longer be active.
JOHN MOEHRKE: when you say... no longer be active... does that mean that it would be impossible to log-in to it? Sorry to be specific, I am a Privacy/Security expert, and don't like abandoned accounts that have healthcare information within them. If I stop using it, I can't tell if an attacker is trying to break in.
Paula B.: I understand.
Paula B.: You will have access to it for up to 18 months. After that point, the information will not longer accessible on myuhc.com.
JOHN MOEHRKE: okay, so that is a specific policy? I like that answer. It gives the user (me in this case) a chance to get old information I might need... while having a specific deadline. Thanks.
JOHN MOEHRKE: can you point to where that policy statement is written? (I trust, but... as I said, I am a Privacy/Security expert... so I like to verify)
Paula B.: You're welcome! I understand, but I am unable to point to where that is written. That is a UnitedHealthcare standard.
JOHN MOEHRKE: okay. thanks
Paula B.: If there is nothing else, thank you chatting today. I hope you have a great day!
Wish I had a policy fragment to point at... I guess I should set a reminder to try in 18 months...

Saturday, December 3, 2016

IHE: Analysis of Optimal De-Identification Algorithms for Family Planning Data Elements

This is a use of the IHE published De-Identification Handbook against a use-case. The conclusion we came to is an important lesson: sometimes the use-case needs can't be met with de-identification to a level of 'public access'.  That is, the 'needs' of the 'use-case' required so much data to be left in the resulting-dataset that the resulting-dataset could not be considered publicly accessible. This conclusion was not much of a problem in this case, as the resulting-dataset was not anticipated to be publicly accessible.

The de-identification recommended was still useful, as it did reduce risk, just not fully. That is, the data was rather close to fully de-identified; just not quite. The reduced risk is still helpful.

Alternative use-case segmentation could have been done. That is, we could have created two sets of use-cases that each targeted different elements, while also not enabling linking between the two resulting-datasets. However, this was seen as too hard to manage versus the additional risk reduction.
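One common way to quantify how far a resulting-dataset is from publicly-releasable is a k-anonymity check: how small is the smallest group of records that share the same quasi-identifiers? The sketch below is a minimal illustration with invented records and quasi-identifiers; real de-identification per the IHE Handbook involves far more than this one metric.

```python
# Minimal k-anonymity check: group records by their quasi-identifier
# values and report the size of the smallest group. A k of 1 means at
# least one record is unique on those fields, hence re-identifiable.
from collections import Counter

def k_anonymity(records, quasi_ids):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

# Invented example: two records share (zip, age band); one is unique.
records = [
    {"zip": "53005", "age": "40s"},
    {"zip": "53005", "age": "40s"},
    {"zip": "53005", "age": "30s"},
]
print(k_anonymity(records, ["zip", "age"]))  # 1
```

A result of 1, as here, is exactly the situation the white paper describes: the retained elements leave the dataset short of public-access safety, even though generalizing those elements did reduce risk.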

IHE IT Infrastructure White Paper Published

The IHE IT Infrastructure Technical Committee has published the following white paper as of December 2, 2016:
  • Analysis of Optimal De-Identification Algorithms for Family Planning Data Elements  
The document is available for download at http://ihe.net/Technical_Frameworks. Comments on all documents are invited at any time and can be submitted at ITI Public Comments.

Wednesday, November 30, 2016

Is IUA (OAuth) useful in Service-to-Service http REST (#FHIR)?

My last article was regarding whether XUA (SAML) was useful in a Service-to-Service SOAP exchange. The same question came to me regarding FHIR and http REST. It was not as well described, as it came in a phone call. But essentially the situation is very similar. There are two trading partners that have an agreement (Trust Framework) that one will be asking questions of the other party using FHIR http REST interfaces.

Using Mutual-Authenticated TLS

The initial solution they were thinking of was to simply use Mutually-Authenticated TLS in place of the normal (Server Authenticated) https.

This is easy to specify, and is consistent with IHE-ATNA.

This solves authentication of the server to the client, and authentication of the client to the server.

This solves the encryption and data integrity (authenticity) problem.

Thus keeping EVERYONE else on the internet out of the conversation.

The negative of this is that one must manage Certificates: one issued to the Client, one issued to the Server. The more clients and servers you have, the harder the management of these Certificates becomes. As this number approaches a large number (greater than 2 by some people's math, greater than 20 by others') it becomes more manageable to involve a Certificate Authority. You can use a Certificate Authority from the start, but it is not that critical. Some Operational-Security-Geeks will demand it, but often they are misguided.

So at this point we have a simple solution, that addresses the issues.  It looks very good on paper.

Problem with Client authenticated TLS

There is however a practical problem that might cause you pain. It caused me pain as soon as the project I was on tried to implement this at scale. At scale, one tends to use hardware assistance with the TLS protocol. There are many solutions; in my example it was F5-based load-balancing hardware TLS support. These fantastic devices make TLS --- FAST --- but their default configuration doesn't include Mutual-Authenticated TLS. They do support Mutual-Authenticated TLS; it can be configured.

The next problem is that the TLS acceleration box strips off TLS, meaning that my web-server gets no indication of the identity of the client. If I don't have different Access Control rules, this might not be a big problem. However, I then don't have a way to record in the Audit Log who the client is. If the client is exactly ONE system, then I can guess that it is that system.

The good news is that the TLS acceleration box can likely be configured to pass along that client identity from the TLS client authentication. In my case, there was a chapter in the F5 documentation that told me how to write the script to be inserted in the F5 so that it would extract the Client Identity and stuff it into an http header of my choosing. Thus my web server could look at that header for the identity of the client. Of course I had to make sure that the header NEVER came from the external world, or it wouldn't be an indication of an authenticated client.  This is a kludge, but a defendable one.
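On the receiving side, the web server then trusts that header only because the proxy strips it from all inbound traffic. A minimal sketch of that handling; the header name `X-Client-Cert-Subject` is my invention (the actual name is whatever you configure in the F5 script), and the handler shape is illustrative rather than any particular framework's API:

```python
# Hypothetical header that the TLS-terminating proxy populates with the
# client-certificate subject. The proxy MUST strip this header from
# external traffic, or it stops being proof of an authenticated client.
CLIENT_ID_HEADER = "X-Client-Cert-Subject"

def handle_request(headers: dict, audit_log: list) -> str:
    """Recover the TLS client identity forwarded by the proxy."""
    client_id = headers.get(CLIENT_ID_HEADER)
    if client_id is None:
        # Absence means no valid client certificate was presented.
        raise PermissionError("no authenticated TLS client identity")
    # Now the identity is available for audit logging and access control.
    audit_log.append({"event": "access", "client": client_id})
    return client_id
```

The fragile part is exactly the comment above: the whole scheme rests on the proxy configuration, which is why it is a kludge.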

Using OAuth is better?

So, using OAuth for client authentication, while just using normal https (server only authenticated TLS) is far easier to configure on the server. In fact it is supported by the default cloud stacks.

The advantage of this is that the OAuth token gets forwarded directly to your web stack, where it can be used for Access Control and Audit Logging.  It can be verified, based on the OAuth protocol (and all the security considerations left open by OAuth). Really nice for both service orchestration and 'scale' is that it can be further forwarded to sub-services, which can validate it and use it for access control and audit logging.  This is a really important feature if you have a nice modular service-oriented architecture.

The drawback of OAuth is that you must include an OAuth authority in the real-time conversation. Whereas with Mutual-Authenticated-TLS the certificate is issued once and is good for a couple of years, an OAuth token is often only good for 24 hours, 8 hours, or less. You could issue 2-year OAuth tokens, but that is unusual.

I need to note that in order to use OAuth you do need to deal with registering the client 'application', which is often done via a static secret (password-like) or via a certificate... Everything ultimately does come down to a secret or a certificate. I would recommend a certificate, but realize that is not the common solution; most implement only a secret key.
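For service-to-service use, that registered secret typically feeds the standard OAuth 2.0 client_credentials grant. A sketch of the two pieces of that exchange; the endpoint URL, client id, and secret would all come from your registration with the authority:

```python
# Sketch of the OAuth 2.0 client_credentials pieces for a backend
# service. Credentials and endpoints are placeholders; a real client
# would POST the body over https and cache the token until it expires.
import urllib.parse

def build_token_request(client_id: str, client_secret: str) -> bytes:
    """Form-encoded body POSTed to the authority's token endpoint."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

def bearer_header(access_token: str) -> dict:
    """Header attached to each FHIR REST request once a token is issued."""
    return {"Authorization": "Bearer " + access_token}
```

The short token lifetime mentioned above shows up here as the `expires_in` value the authority returns alongside the access token, which is what forces the authority into the real-time conversation.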

Both is better?

Not really. The only benefit you get by using both Mutually-Authenticated-TLS and OAuth is that the hardware-accelerated TLS box (F5) can reject bad clients with less overhead on your web-stack. This is a benefit, but you need to weigh it against the cost of certificate issuing and management.

It is however better in that it has the fewest hacks to get it to work fully.

Conclusion

As easy as http REST (aka FHIR) is, it is very hard to get security right. Sorry, but the vast majority of http REST carries either completely non-secured content or only simple security, such as a Wiki, Blog, or Social network. None of them are dealing with sensitive content that is multi-level sensitive. It is this that makes healthcare data so hard to secure while respecting Privacy.



Friday, November 25, 2016

Is XUA useful in service-to-service?

I got an email question asking if the use of XUA is proper for situations of service-to-service communication.
I am not sure how far XUA really got in the IHE world, but we have an HIE in XYZ [sic] that seems to want to implement it on every IHE transaction, even those without a document consumer. Our role with them is strictly at a system level as a document provider and of course we are using Mutual Authentication
Reading the XUA spec it seems that IHE was gunning for consent authorization of a document consumer and those transactions, though it never actually came out and said "just" those transactions. SO my questions.
Does the IHE have a stance on this ? Are all transactions(XDS and PIX PDQ) to use SAML ? Or is the spirit of the law about consent and document consumption calls ?
How much is XUA used... very hard to know. But the concept of XUA is simply that a requesting party identifies the requesting agent using SAML, where that agent is usually a human in an interactive workflow. If we recognize that XUA is simply the use of SAML, not any specific subset, then I would say it is very universally used. In many cases the server is ignoring it completely; in a few more it is doing nothing but recording it in an audit log; but in a few it is being used in an Access Control decision. All of these are the vision of XUA.

XUA is also not tied to XDS; it is tied to SOAP transactions. These are mostly found in the XDS family (XCA, XDR, XDS), but also exist in some patient lookup transactions (XCPD, PIXv3, and PDQv3). However, there is no clear binding between SAML and HL7 v2 transactions like PIX and PDQ; it is not clear how one would identify the user in cases of HL7 v2. Note that you could identify a user in PDQm and PIXm using IUA, but that is a different blog - Internet User Authorization: why and where.


XUA does include a number of optional attributes: specifically, use-cases that, when needed, shall be satisfied in a specified way; but the use-cases themselves are not mandatory. There are indeed a few consent-focused use-cases in this optional space. If the client needs to inform the server of a specific consent that authorizes access, THEN it is communicated thus. Other more commonly used use-cases are those around the name of the user and the purpose of use for the request.

XUA is independent of consent, although many times consent is specific down to the user.

XUA is most often useful when the requesting party is a human, but that is not the only useful scenario.

However it is not unusual to use XUA to identify the service that is making a request, even if it is redundant to the TLS 'client' certificate. The SAML assertion is more expressive, and including it allows for future expansion to utilizing this more expressive capability.
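As a small illustration of that expressiveness, here is a sketch that pulls the subject out of a SAML 2.0 assertion for audit logging. The assertion XML is hand-made for the example, and a real deployment must validate the XML signature and conditions before trusting anything in it:

```python
# Pull the Subject NameID out of a SAML 2.0 assertion so the receiving
# service can record who (human or background task) made the request.
# Signature validation is deliberately out of scope for this sketch.
import xml.etree.ElementTree as ET

SAML2 = "urn:oasis:names:tc:SAML:2.0:assertion"

def subject_name_id(assertion_xml: str):
    root = ET.fromstring(assertion_xml)
    name_id = root.find(f".//{{{SAML2}}}Subject/{{{SAML2}}}NameID")
    return None if name_id is None else name_id.text

# Hand-made assertion naming a background service, not a human.
example = (
    f'<Assertion xmlns="{SAML2}">'
    '<Subject><NameID>service-account/lab-feed</NameID></Subject>'
    '</Assertion>'
)
print(subject_name_id(example))  # service-account/lab-feed
```

Note the subject here is a service account; the same mechanics identify a human user, which is the point made in the conclusion below.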

On a practical perspective, it is common for TLS to be terminated at the very edge of a cloud infrastructure. It certainly authenticates the calling system, but being terminated in a TLS-specific piece of equipment, that identity is not available for the Access Control checks that happen later. This kind of configuration simply can't make access control decisions based on the TLS client identity (or can't without some hacks in the service stack).

Conclusion

So I think it is reasonable that you are being asked to include a SAML assertion in all requests, even those that are automated and for which the only identity you can claim is the automated service itself. The analysis that does need to be done is: what triggered the request? The agent that triggered the request is the one that needs to be identified. Today it is likely to be a background task, not a human. Background tasks can be identified in SAML just as well as humans can.

Monday, November 21, 2016

Starting to blog again

Sorry to my audience for not getting much from my blog lately. The transition back to working life has been distracting me. I am very sick of forms. I realize that I benefit from the forms being online, filled in using a browser from the comfort of my home. I can only imagine that a few years ago all of this training and these forms would have been in-person and on paper.

Some blog topics:

  1. IHE (ITI and possibly others) Plans for next year...
  2. Finish out my Privacy Consent topic with detailed breakdown of the abstract (done) into 
    • IHE-BPPC, 
    • IHE-APPC, 
    • HL7-CDA-Consent, 
    • HL7-FHIR-Consent, 
    • Kantara-Consent-Receipt, and 
    • OAuth and UMA
  3. IHE role in a FHIR world
  4. Adding sensitive data to a Health Information Exchange
  5. Something useful about Blockchain... 
  6. Something assertive about OAuth and FHIR


I often write an article based on some random question I got via email... so please ask me random questions. You can try to use my blog's "Ask Me A Question".

Tuesday, November 1, 2016

Starting my new chapter

I start my new job today. No office to go to; home is my office. I now work for a consulting organization, "By Light Professional IT Services, Inc", that has a 4-year contract supporting and enhancing the Health Information Exchange capability connecting VA healthcare (VHA) to the rest of healthcare. I am a Standards Architect, doing the same thing I did for GE: standards creation and use. Working for the government, I have had a few dozen forms to fill out, got fingerprinted, then hours of training.

I will still be blogging about the standards developments, and implementation guidance on implementing those standards. I likely will be covering Privacy and Security less; heading more into transports and content.

Saturday, October 15, 2016

Tutorial on #FHIR #Security

Nice recorded tutorial by Pascal Pfiffner on FHIR Security from an application developer perspective. I understand from Rene that this was written based on my outline in the article FHIR Security and Privacy - tutorial outline

Some more details and emphasis...

Friday, September 23, 2016

Mobile Health Cloud vs Privacy Regulations

There is some strong discussion going on at HL7 around privacy concerns, especially now that HL7 FHIR has enabled easy application writing.  The discussion started with an article "Warning mHealth security fears are opening doors to app and device innovation" summarizing a study done by Ketchum.  There is concern that applications are being written by people that might not be as mature in the knowledge of how important Privacy is in healthcare.
  • There are concerns that new regulations will stifle innovation. I disagree...
  • There are recommendations that broader healthcare regulations are needed. I disagree...
  • There are concerns that identifiers for patients will be bad for Privacy. I disagree...
  • Some indicate that application developers don't care about privacy until a breach puts them in trouble. I disagree...
Let me explain my disagreement... I will also say that I agree with these concerns, just not in broad terms.

This problem of mobile applications and Privacy is not unique to Healthcare. Healthcare is the scope of HL7, so it is understandable that the discussion there is focused on it. I point this out because, from a Privacy and Security perspective, we are far better off solving the problem together with all domains than trying to solve it uniquely for healthcare. Healthcare does have some unique issues, like data that can't be revoked or recalled.

The issue is somewhat unique to the USA because of our extremely fragmented Privacy regulations: HIPAA, GINA, 42 CFR Part 2, and many state augmentations. This patchwork of privacy regulations makes it very hard to understand the requirements; only very large organizations have the legal resources to untangle it all into one concept.

Privacy regulations are not important for instructing application writers on how to do the right thing.


Many application developers want to do the right thing, so they seek out Privacy-by-Design and other Privacy Principles. These application developers design Privacy into their application, and thus Privacy does not get in the way.

Privacy regulations are important to deal with the application developers that don't try to honor Privacy; or those that actively thwart Privacy. Regulations are needed so that bad behavior can be detected, and prosecuted. Don't focus on Regulations to drive the right thing, look to them to prevent the wrong thing. In a perfect world there is no reason for regulations. A perfect world is where everyone wants to do the right thing for their peers, and have full resources to figure out what that right thing is. We don't have a perfect world... yet.

Mobile applications and the cloud are not limited by physical borders, so they really need to consider the whole world. The problem that we have in the USA is the same problem at a global scale: there is a huge patchwork of privacy regulations globally. The solution is the same: put Privacy first. Use Privacy-by-Design and other Privacy Principles. Make your application the best Privacy-supporting application, and it will work everywhere (everywhere that governments themselves don't thwart privacy principles).

Build Privacy in from the beginning and it is not hard to do, nor will it take away from a good user experience. Hack it on later and it is surely going to be problematic. Apple is a good example of building Privacy in by design, and they have few (not zero) issues. Whereas Facebook is a good example of hacking privacy on later, although they pushed through the hard part and are much better now.

The CBCC workgroup in HL7 is trying to do its part: it is creating a Privacy Handbook that all HL7 workgroups can use when they create new standards, to assure that any Privacy Considerations are either handled in the standard being created or explained to the reader of that standard. The same thing is done by W3C, IETF, and OASIS; so we are solving the problem together with those domains.

If you can't protect the data, then don't collect the data.

Other Privacy topics are covered in these articles.

Thursday, September 8, 2016

HL7 ballot:Guidance on Standards Privacy Impact Assessment

The CBCC workgroup has published a 'handbook' for comment in the current HL7 ballot. This handbook is to be used by the workgroups within HL7 for the purpose of producing HL7 standards that have 'considered' privacy. The expectation is that when a standard has considered privacy, it will be easier to assure privacy when it is implemented.

Fortunately this is a first draft, and a draft for comment... so one hopes that major changes can be done.

I have voted negative with three dozen comments, mostly negative. The problem this handbook has is that it asks an HL7 workgroup, while it is writing an interoperability standard, to do a Privacy Impact Assessment using Privacy by Design. These are great tools, but they are tools focused on an operational environment. Trying to apply them to the design of an HL7 interoperability standard is impossible, or at best too difficult.

This should have been obvious to the authors of this HL7 SPIA, given that the conclusion of each of the 10 steps is to write the same boilerplate text into the target specification: follow regulations. That alone should have made it very clear that they were using the wrong tool for the job.

I recommended from the start of this project, and my negative comments reflect this, that HL7 follow the lead of IETF and W3C. They have an approach that supports PIA and PbD, but it is cast into actions that a workgroup developing an interoperability standard can properly execute. They use terminology that is understandable, or well defined. They have reasonable steps and reasonable activities.

HL7 is a standards organization, we expect the standards we produce to be used. We expect that the healthcare domain will not ignore HL7 and invent their own solution. Thus as a standards organization, we should look to other standards organizations that have already created standards that are applicable, and USE THOSE STANDARDS. Why are we re-inventing what IETF and W3C have already produced?  I think it is fully appropriate that we cast their text into terms that the HL7 community uses, however even that gap is narrowing with FHIR.


Tuesday, September 6, 2016

Looking for career opportunity

Update: Found a new Career, started November 2016

As some of you know, I am currently exploring new career opportunities. Who better to reach out to than those who understand and are interested in what I do through following my blog? Topics such as Privacy Consent, Access Control, Audit Control, Accounting of Disclosures, Identity, Authorization, Authentication, Encryption, Digital Signatures, Transport/Media Security, De-Identification, Pseudonymization, and Anonymization. In the spirit of good networking, I'd like to share my thoughts and objectives for my next adventure. Any thoughts, feedback, suggestions, or contacts would be greatly appreciated.

I seek to be considered for an Interop Architect, Interop Program Manager, Standards Developer, Privacy Architect, or other similar leadership position that allows me to continue to engage with international standards development while directing one or more teams in the implementation of those standards. My philosophy is that Interoperability Standards are not a destination; they are a catalyst that enables something far greater to happen. Privacy is not an encumbrance, but an enabler of something far greater.

I have over 30 years of experience with IT communications, including 18 years of expertise in Healthcare Interoperability Standards and the application of Privacy. I have worked closely with product development teams working on small medical devices, big medical devices, health information systems, and cloud workflows combining all.

I am especially excited about the latest standard from HL7 - FHIR. The FHIR standard leverages modern platforms and interaction models. It expresses the healthcare data model using XML or JSON, and the interaction model using HTTP REST.

I currently hold a co-chair position in HL7 security workgroup, as well as a leadership position in HL7 FHIR Management Governance. I am recognized as a leader on the topics of Privacy, Security, and Interoperability in DICOM, IHE, and HL7.  I wish to continue with my engagement with HL7, IHE, and DICOM standards organizations. Interoperability standards allow for the best re-use of technical implementations. These standards set the basis upon which we will add-value.

I have significant experience interacting with government bodies to help them with the evaluation of Interoperability Standards, and the writing of regulations to improve healthcare. I was a member of the HIT Standards - Privacy and Security workgroup, Direct, HITSP, and CCHIT before that. I have influenced USA regulations such as HIPAA and Meaningful Use, as well as regional regulations globally. I am a member of the Wisconsin HIE technical advisory committee, and provide technical advice to the USA national eHealth Exchange. I have advised HIE implementations in Saudi Arabia, Italy, France, the EU, etc.

I am a true believer that Privacy + Interoperability are not just equal to the sum of the parts; they enable something greater than could ever happen without them. I openly and eagerly advise and encourage through 7 years of blogging.

See my Resume/CV on LinkedIn https://www.linkedin.com/in/johnmoehrke

Comments, Suggestions, Recommendations are welcome. I don't expect my readers have job opportunities sitting there waiting. However I do expect that you might know someone who knows something is happening...

PS. It appears I am going to miss the September HL7 meeting in Baltimore. This is my second miss in a row due to not having an employer.  This is sad for me as I look forward to being able to interact with my peers face-to-face.

PPS. I am not a "Security Architect". I love the security architects, they do a hugely important service for Privacy. I just don't find the kind of focus on defense to be fun. I am far more interested in enabling the right use of data (Privacy), than trying to stop the mass of maleficence.

PPPS. Happy birthday to my blog... now 7 years old.

Monday, August 29, 2016

Blockchain and Smart-Contracts applied to Evidence Notebook


There is a need where an individual or team needs to record chronological facts privately, and in the future make these facts public in a way that the public can prove the integrity and chronology. Where the chronological facts need to be known to within some timeframe, typically within a day. Where the sequence of the facts needs to be provable. Where a missing recorded fact can be detected. Where an inserted fact can be detected. Where all facts can be verified as being whole and unchanged from the date recorded. Where all facts are attributable to an individual or team of authors.

Description


These proofs are used to resolve disputes and prevent fraud, in areas like intellectual property management, clinical research, or other places where knowing who and when, in a retrospective way, is important. Aka: Lab Notebook, Lab Journal, Lab Book, Patent Notebook. Here is an image from the Laboratory Notebook of Alexander Graham Bell, 1876.


Historically, tamper-evident notebooks provided assurance of data provenance with clear chronology. Sewn bindings and numbered pages were the foundation, which the user annotated with name and date inscriptions in indelible ink. While not infallible, these notebooks were good enough for many important evidentiary functions.

Blockchain technology can bring this historical practice into the digital age. In particular, blockchain can be used to allow for work to be conducted in private yet be revealed, either by choice or circumstance, at a future date.

There are three variations on the use case:

  1. Bob is doing research that may eventually be presented publicly. When it is presented publicly there is a need for historic evidence of all the steps and data used. Today this is done with a tamper-evident notebook. The authors of these notebooks are also careful to include date/time as they progressively record their work. In this way an inspection of the notebook can determine that it is whole and not modified, and thus trust the contents, when they were recorded, and by whom.

  2. Prior to 2013, the US Patent and Trademark Office (USPTO) used First-To-Invent to determine priority. While the tamper-evident notebook was essential in that model, it is still valuable supporting evidence even after the switch to First-To-File. In particular, intellectual property disputes benefit from tamper-evident records.

  3. Publicly funded research programs (e.g. NIH, NSF, DARPA) increasingly mandate the release of underlying data at a future date. There is also a trend on the part of regulatory bodies toward full data access, especially in light of concerns over negative results from clinical trials not being reported.

Narrative

The following are the various steps in the overall process.
  • As entries are added to an Evidence Notebook
    • The evidence is recorded in a private notebook, and an Author Signature is submitted to a purpose specific blockchain.
    • The Author may choose to also archive the evidence onto the blockchain.
    • Members of the community, as part of their support of that community, will counter-sign these Author Signature blocks
  • At some time in the future when the Evidence Notebook needs to be disclosed, the Author will declare to the community their identity
  • In support of a disclosure, any member of the community with access to the Evidence Notebook may validate the notebook.

Use-Case Keeping Records

Bob, at some periodic point or based on some procedural point, submits the new Evidence Notebook pages. This is done by computing a Digital Signature across the new evidence pages, creating an Author Signature. This Author Signature is then placed onto the Evidence Notebook Blockchain, signed by an identity in the control of Bob. The Author Signature does not expose the content of the evidence notebook, but can be used by someone, like Edna, who has access to the Evidence Notebook to prove that the pages submitted have not changed.

  • ? Is there a need to define the Author Signature other than to say it is an XML-Signature format, with the signature coming from the blockchain rather than from PKI? The advantage the blockchain gives is the identities, the algorithm choice, and the public ledger.
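To make the mechanics concrete, here is a minimal sketch of creating an Author Signature block. It is illustrative only: a real implementation would use an asymmetric signature (such as the XML-Signature discussed above), while this sketch stands in an HMAC for the signing step, and all names are hypothetical.

```python
import hashlib
import hmac
import json
import time

def author_signature(pages, author_key):
    """Create an Author Signature block for newly added notebook pages.

    Only the digest and its signature go onto the blockchain; the
    pages themselves stay private with the author.
    """
    # Digest across the new evidence pages, in order.
    digest = hashlib.sha256(b"".join(pages)).hexdigest()
    block = {
        "digest": digest,
        "timestamp": int(time.time()),
        # Pseudonymous identity: nothing here reveals who Bob is.
        "author_id": hashlib.sha256(author_key).hexdigest()[:16],
    }
    # HMAC stands in for a real asymmetric signature keyed to Bob's identity.
    payload = json.dumps(block, sort_keys=True).encode()
    block["signature"] = hmac.new(author_key, payload, hashlib.sha256).hexdigest()
    return block

# Bob signs two new pages without revealing their content.
sig_block = author_signature(
    [b"page 41: assay results", b"page 42: analysis"], b"bob-private-key")
```

Note how the block carries enough for later verification (digest, timestamp, identity) while revealing nothing about the evidence itself.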

Use-Case Escrow of Notebook

Bob can optionally put onto the blockchain the updated evidence notebook pages or any evidence (e.g. data) in encrypted form, with a smart-contract holding the key in escrow until one or more terms come true to release the content. The smart-contract can assure that the keys are appropriately disclosed upon trigger events such as a time period, inactivity by Bob, or other typical contract terms. This escrow also preserves the content across the blockchain's redundancy.

  • ? Should the encrypted notebook pages also be cross-signed by the community? The signature would be of the encrypted blob, which would be proof that the encrypted blob appeared on the blockchain at that time.

There is no way to confirm that Bob has placed complete evidence into this encrypted evidence package without also having access to the evidence. Thus there still is the risk that Bob has done an incomplete job of preserving evidence.
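The escrow logic above can be sketched as a simple contract object. This is a hedged illustration of the trigger terms described (a release time, or prolonged inactivity by the author); it is not tied to any particular smart-contract platform, and the class and method names are hypothetical.

```python
import time

class EscrowContract:
    """Sketch of a smart-contract holding a decryption key in escrow.

    The encrypted notebook blob lives on the blockchain; the key is
    released only when a trigger term is met: a release time passes,
    or the author goes silent for too long.
    """

    def __init__(self, key, release_at, inactivity_limit):
        self._key = key
        self.release_at = release_at
        self.inactivity_limit = inactivity_limit
        self.last_author_activity = time.time()

    def heartbeat(self):
        # Bob signals he is still active, postponing the inactivity trigger.
        self.last_author_activity = time.time()

    def request_key(self, now=None):
        now = time.time() if now is None else now
        if now >= self.release_at:
            return self._key  # time-period trigger met
        if now - self.last_author_activity > self.inactivity_limit:
            return self._key  # inactivity trigger met
        return None  # no term satisfied; key stays in escrow

# Key escrowed for one day, with a one-hour inactivity trigger.
contract = EscrowContract(b"decryption-key",
                          release_at=time.time() + 86400,
                          inactivity_limit=3600)
```

The `heartbeat` call is how Bob postpones the inactivity trigger while work continues; once either term fires, the key (and thus the escrowed evidence) is released to the sponsor.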

Support Use-Case Counter-Signature

Peers from the community will counter-sign these Author Signatures. This blockchain signature by peers simply indicates that the Author Signature block was observed on the Evidence Notebook Blockchain at the stated time. Through multiple counter-signatures by peers, trust in the Author Signature's veracity is confirmed.

Automated timestamp peers could also be used, that do nothing but apply a verifiable timestamp signature across any new Author Signatures. These are indistinguishable from Peers, except that Peer identities would also be submitting their own Author Signatures, expecting peer counter-signatures.

Peers are compelled to counter-sign as an act of community. By counter-signing others' Author Signatures, a peer identity gains more peers willing to counter-sign any Author Signatures that identity might post ("you scratch my back, I'll scratch yours"). Thus, a new identity on the blockchain that has not yet counter-signed others' Author Signatures would not find peers willing to sign that new identity's Author Signatures.
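A counter-signature, as described, attests only that the Author Signature block was observed on the chain at a given time; it says nothing about the hidden content. A minimal sketch, again with an HMAC standing in for the peer's real signature and all names hypothetical:

```python
import hashlib
import hmac
import json

def counter_sign(author_block, peer_key, observed_at):
    """A peer attests that an Author Signature block was visible on the
    chain at `observed_at`; it says nothing about the hidden content."""
    payload = json.dumps(
        {"signature": author_block["signature"], "observed_at": observed_at},
        sort_keys=True,
    ).encode()
    return {
        # Pseudonymous peer identity, like the author's.
        "peer_id": hashlib.sha256(peer_key).hexdigest()[:16],
        "observed_at": observed_at,
        # HMAC stands in for the peer's real blockchain signature.
        "counter_signature": hmac.new(peer_key, payload,
                                      hashlib.sha256).hexdigest(),
    }

# Paul observes Bob's Author Signature block and counter-signs it.
cs = counter_sign({"signature": "ab12cd34"}, b"paul-private-key", 1474588800)
```

An automated timestamp peer would run exactly this routine on every new Author Signature it sees, which is why it is indistinguishable from an ordinary peer.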

Use-Case Public Knowledge

The system to this point does not require identities to be known. Neither Bob nor the peer identities need be publicly known; they are simply identities on the Evidence Notebook Blockchain. An identity owner is free to explicitly make their identity known.

Bob needs to make public claims backed by the Evidence Notebook, proven through Author Signatures made by a specific blockchain identity or identities. To do this, Bob makes public proof that he is the holder of the private key associated with one or more identities, thus binding Bob's identity to all historic uses of that identity.

Once Bob makes identities public knowledge, others can monitor new Author Signatures created by those identities. Because this exposes ongoing activity, identities that have been made public might no longer be used for new Author Signatures. Conversely, the public knowledge of an identity may be seen as beneficial, so an identity may be made public early.

Use-Case Verifying Records

Edna needs to confirm the content of an Evidence Notebook. Edna has been given access to the Evidence Notebook content. Edna knows the Evidence Notebook Blockchain identity that is claiming to have made the Author Signatures corroborating the specific pages of the Evidence Notebook. The Evidence Notebook may be in any electronic form, as long as the Digital Signature process is repeatable; this is often achieved using the XML-Signature mechanism.

Edna verifies the Author Signature of each submission (page). Edna verifies the counter-signatures to gain assurance that the Author Signature has not been tampered with and occurred during the time indicated.

Edna may choose to discount specific identities that have been determined to be fraudulent, or where control of that identity's private key has been compromised. Edna may choose to discount identities that have not yet made themselves public, holding public identities higher; note that the movement of an identity from anonymous to public has value to the community as a whole.
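Edna's checks can be summarized in a small verification routine. This is a sketch under the same assumptions as the earlier snippets: the digest must be repeatable from the disclosed pages, and enough non-discounted peers must have counter-signed. The threshold and field names are illustrative.

```python
import hashlib

def verify_notebook(pages, author_block, counter_sigs,
                    min_peers=3, distrusted=frozenset()):
    """Edna's verification: the disclosed pages must hash to the
    on-chain digest, and enough trusted peers must have counter-signed."""
    # 1. Repeatable digest: disclosed pages must match the chain entry.
    if hashlib.sha256(b"".join(pages)).hexdigest() != author_block["digest"]:
        return False
    # 2. Discount fraudulent or compromised peer identities.
    trusted = [c for c in counter_sigs if c["peer_id"] not in distrusted]
    return len(trusted) >= min_peers

# Edna checks a disclosed page against its on-chain Author Signature block.
pages = [b"page 41: assay results"]
chain_block = {"digest": hashlib.sha256(b"page 41: assay results").hexdigest()}
peer_sigs = [{"peer_id": p} for p in ("peer-a", "peer-b", "peer-c")]
ok = verify_notebook(pages, chain_block, peer_sigs)
```

A tampered page fails step 1; discounting a compromised peer can drop the counter-signature count below the threshold and fail step 2.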

Actors

(brought in whole list from here. Figured we should re-use actors if they fit.)

  • #Bob: The person or entity that submits Author Signatures. They are assumed to be an investigator or worker on a research team.
  • #Edna: An authenticated and authorized individual who has been granted access to the Evidence Notebook. This may be a staff researcher for the Study Sponsor doing cross-study correlations, or an external researcher with a different study question that can be answered with previously collected data.
  • #Paul: A peer on the blockchain. The identity may be known or not known.
  • #Mal: Generic bad actor.
  • Research Sponsor: The organisation that receives research data. These individuals or systems need access to the evidence. They may receive this evidence directly, or through the Escrow Evidence. For the purpose of diagrams and data flows, any member of the study team will be represented as "Dan".
  • Research Team: The individuals and systems who are performing some research or other activity for which an Evidence Notebook is necessary. Bob is a member of the research team. For the purpose of diagrams and data flows, any member of the research team will be represented as "Bob".
  • Peers: The individuals and systems who counter-sign Author Signatures to help provide veracity. It is expected that peers will not be part of the same research team as Bob.

Prerequisites / Assumptions

  • Bob needs to keep the research confidential until some future time.
  • The format of the notebook need not be constrained, as long as digital signature can be validated once the notebook is made public.
    • Presume use of XML-Signature schema can mediate this
    • If Evidence data is disclosed it must be properly handled or de-identified
  • There is no need to publish the content of the notebook on the blockchain.
    • There is an option for encrypted notebook on the blockchain, and use of smart-contracts to unlock as appropriate
  • Bob may have many notebooks, or may have many research projects interleaved within one notebook. This is similar to paper notebooks today.
  • Bob may need to hide his current activities, meaning new activity can’t be associated with Bob

Use Case Diagrams


Use Case steps

  1. New Author Signature
    1. Bob updates his evidence notebook
    2. Bob submits an Author Signature block to the blockchain
    3. Bob optionally submits Evidence blobs to the blockchain
    4. Paul notices a new Author Signature block
    5. Paul counter-signs the Author Signature block
  2. Evidence Notebook validation
    1. Edna is asked to confirm an Evidence Notebook
    2. Edna is given access to the Evidence Notebook (may not be public disclosure)
    3. Edna validates signatures from the blockchain
    4. Edna validates counter-signatures from the blockchain
    5. Edna extracts timestamps from set of signatures
    6. Edna may validate Public Signatures as necessary
  3. Evidence disclosed
    1. Smart-Contract triggers
    2. Smart-Contract may include notification mechanisms to Dan
    3. Dan receives Evidence and decryption keys given trigger on Smart-Contract

Sequence Diagrams

(drafting, not yet done)

End State

The use case ends when Bob stops submitting Author Signatures under a given identity. There is no expectation that identities must remain publicly unknown, or that they can't be used once publicly known.

Success

  • Author Signatures are validated
  • Modified Author Signatures are detected as not valid
  • Participation sufficient to achieve (n) counter-signatures
  • Funding by organizations relying on output (research, clinical trials, etc)

Failure

  • Participants' collusion to revise history
  • Is an insufficient number of peers, and therefore an insufficient number of prompt counter-signatures, a distinct failure mode?


Champion / Stakeholder

John Moehrke (self)
Scott Bolte (Niss Consulting)

Related Material


Common Accord: CommonAccord is an initiative to create global codes of legal transacting by codifying and automating legal documents, including contracts, permits, organisational documents, and consents. We anticipate that there will be codes for each jurisdiction, in each language; for international dealings and coordination, there will be at least one "global" code. (Center for Collaborative Law)

IP Handbook - "Inventors and Inventions" - Chapter 8: "How to Start - and Keep - a Laboratory Notebook: Policy and Practical Guidelines" http://www.iphandbook.org/handbook/ch08/p02/

MIT - Instructions for Using Your Laboratory Notebook http://web.mit.edu/me-ugoffice/communication/labnotebooks.pdf May, 2007

NIH - “Keeping a Lab Notebook” - Presentation by Philip Ryan, https://www.training.nih.gov/assets/Lab_Notebook_508_(new).pdf

FDA - Pharmaceutical Quality Control Labs - http://www.fda.gov/ICECI/Inspections/InspectionGuides/ucm074918.htm

Cornell - LabArchives - an electronic lab notebook - http://collabhelp.cit.cornell.edu/lab-archives/

Howard Kanare - Writing the Laboratory Notebook, American Chemical Society Publications, 1985,  ISBN 978-0841209336

Astroblocks - Lab Journal on Blockchain, experimental use of bitcoin chain, April, 2015, http://www.newsbtc.com/2015/04/11/astroblocks-lab-journal-on-blockchain/