Thursday, September 29, 2011

Securing RESTful services

What is meant by RESTful? Ok, that is an old question, given that there is no formal standard called REST. My understanding is that RESTful is simply the philosophy of using HTTP's built-in command set POST/GET/PUT/DELETE (that is, Create, Read, Update, Delete; or to others RLUS: Read, Locate, and Update Service). Thus the transport, encoding, and command set are fixed. The theory is that you as a programmer then just focus on the special encoding above this command set, for example what your query parameters are and what the result looks like. This frees you from worrying about transport or command set, and ... security.

As far as securing RESTful services goes, IHE ATNA already says how to do that: Mutually Authenticated TLS. I talk at length about this in Securing mHealth - the role of IHE profiles, specifically about the operational reality of using ATNA. IHE ATNA takes care of many risks, and does provide system authentication. Sometimes knowing the requesting system is enough to know that you can trust that the system would only ask for information that it knows the user is authorized to get. And because ATNA requires security audit logging, the service can trust that the client will record the user and purpose in the audit message captured on the client side.
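To make the ATNA transport concrete, here is a minimal sketch of a mutually authenticated TLS client configuration using Python's standard ssl module. The file paths are deployment-specific assumptions (your trust anchors and client credential come from your own PKI), and ATNA itself does not mandate any particular toolkit; this only illustrates the general shape:

```python
import ssl

def atna_client_context(ca_file=None, cert_file=None, key_file=None):
    """Sketch of an ATNA-style mutually authenticated TLS client context.

    The paths are deployment-specific assumptions; when omitted, only the
    policy settings are applied (useful for testing the configuration).
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust anchors to verify the server
    if cert_file:
        # Our own certificate and private key: this is what makes the
        # authentication *mutual* when the server demands a client cert.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

The resulting SSLContext can be handed to any HTTP client that accepts one; the server side completes the mutual authentication by requiring and verifying the client's certificate.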

What I want to address is deeper than simple HTTPS -- or even full Mutually-Authenticated-TLS. I want to address user and patient based access controls to very sensitive health information. Today RESTful is used mostly to access non-sensitive information. It might be important information, or it might simply be maps, earthquakes, weather, etc. Most uses of RESTful are not trying to access information as sensitive as healthcare information, certainly not information governed by privacy policies (Consents) that rule so finely over the data. Many are asking for RESTful to be used to access fully identified clinical information, and some are even asking that it be used to create or change this clinical information - such as the IHE Profile Proposal for a RESTful interface to XDS. These are the cases where I am trying to figure out how to secure RESTful services as well as we secure SOAP.

In SOAP we have well defined ways to communicate the security context. IHE profiled the use of SAML assertions (XUA): who the user is, what their roles are, what they intend to use the data for, and any authorizations they hold. I cover this in the Bloginar on XUA. With SOAP based web-services this all comes along in the security layer built into SOAP, the WS-Security layer. RESTful doesn't have this layer, or at least not one this well defined.

As to providing user identity, there is some hope, but no clear winner. Yes, there is a Kerberos mechanism for HTTP (documented in EUA). Kerberos has issues when used beyond a constrained environment, so it is not well suited to HIE use.

Yes, you can use SAML over HTTP. This is not documented in IHE because it is not implemented consistently in toolkits --- but it is used today for browser interactions. For example, inside GE all user authentication uses SAML identities, mostly through the SAML "Browser SSO Profile", which makes it easy to work with external parties such as travel reservations. This method doesn't work well for a system-to-system API. The good news here is that the OASIS committee that handles SAML is working on this very problem now.

Most RESTful people want to use OpenID nowadays, which is a good choice for the last-mile API; it just doesn't support the user context attributes (role, purpose of use, authentication type) that access to sensitive information really needs. For this the OpenID community adds OAuth, which is fast developing but not mature. OAuth 2.0 looks really good in this camp.
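As a sketch of what that last-mile API call might look like, here is a RESTful request carrying an OAuth 2.0 bearer token, using only Python's standard library. The endpoint path and query parameter are hypothetical, and the token itself would be obtained out-of-band through an OAuth authorization flow:

```python
import urllib.request

def build_authorized_request(base_url, patient_id, access_token):
    """Build a GET request carrying an OAuth 2.0 bearer token.

    The /documents endpoint and the patient query parameter are illustrative
    assumptions, not any profiled API; the access_token comes from a separate
    OAuth 2.0 authorization step.
    """
    return urllib.request.Request(
        f"{base_url}/documents?patient={patient_id}",
        headers={
            "Authorization": f"Bearer {access_token}",  # OAuth 2.0 bearer token
            "Accept": "application/json",
        },
    )
```

The request would then be sent with urllib.request.urlopen() over the mutually authenticated TLS channel; note that the bearer token only names the user context, while the transport security still comes from TLS.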

The worst choice for user identity is an inline HTML form. This might work for interacting with a human, but as a programming interface it is very hard to work with. This solution locks you into one method of authentication and one centrally managed user database, thus proliferating the post-it note problem.

I hope to uncover the 'right' way to specify a RESTful service API for accessing highly sensitive healthcare information. I am not sure I can provide as good a security layer as SOAP provides today, but I am hopeful and open to suggestion. IHE will try to figure out all the possibilities and all the operational environments. Much of the documentation I find is specific to one platform or another.

I suspect that we will use something like OpenID + OAuth on the RESTful side, and use WS-Trust to convert these tokens in a proxy service, so that on the backend we can use SAML to interact with the XDS or XCA backbone. I think this is a reasonable solution. I do expect that a RESTful API will be deployed for a specific use. It might be used by a large healthcare organization, by a PHR vendor, or by an HIE; the point is that this is an API into XDS/XCA that is hosted for a very specific purpose. That very specific purpose can scope the security context well enough to make things easy on the browser side while satisfying the needs on the backend.

We could ignore the problem, but then what would "App" developers do? Guess at what they need to implement? Even a bad single choice is better than no choice. If we at least tell App developers to include HTTP Basic Authentication, we can be sure that they can do that much, while hoping they have thought beyond the minimum necessary to be compliant with the profile.
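HTTP Basic Authentication is trivially easy for an App developer to implement, which is its one virtue. The sketch below builds the header from scratch; the example credentials are the classic ones from the HTTP authentication RFC. Because Basic sends the password with every request, it is only tolerable inside a TLS-protected connection:

```python
import base64

def basic_auth_header(user, password):
    """RFC 2617 Basic credentials: base64 of "user:password".

    Only acceptable over TLS, since the password travels with every request.
    """
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}
```

For example, basic_auth_header("Aladdin", "open sesame") yields the familiar "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==" header value from the RFC.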

Please help. Please provide your choice. Please provide your environmental problem. The more information we have at the start, the better choices can be made.

Monday, September 26, 2011

Document Encryption

IHE has a new supplement, “Document Encryption (DEN)”, out for Trial Implementation that explores all the possible ways that encryption could be applied to Documents. This supplement went to great lengths to define a large set of different use-cases, each with their own concerns regarding protecting documents. It also explored the existing profiles’ capabilities to meet these use-cases, and thus identified a few gaps.

The following is based on a table (Table Q-2, Use cases for existing and new IHE profiles with encryption) found in the supplement, reformatted to fit in the blog. In the original table each use-case is shown in a row and each IHE solution in a column; an “X” marks where a profile directly addresses the use-case, and an “(x)” marks partial support. For the full mapping and details on each use-case, please see the document. The solutions compared are: Document Encryption, the new XDM Media Encryption option, the XDM Email option, and the PDI option (CMS). The use-cases compared are:

·  Point-to-point network exchange between machines
·  Network exchange between machines in different trust domains
·  Online exchange of documents where partially trusted intermediaries are necessary
·  Exchange of medical documents using person-to-person Email
·  Media data (DICOM) exchange between healthcare enterprises using physical media
·  Exchange of health records using media
·  Media-to-media transfer
·  File clerk import
·  Unanticipated work-flows
·  Clinical trial
·  Multiple recipients of a secure document
·  Sharing with receivers only partially known a priori, a group or a role
·  Partially encrypted XDM submission set
Some of these use-cases are not fully satisfied by the existing profiles, so the supplement goes on to define how to (a) encrypt XDM media, and (b) encrypt a document alone, independent of any transport.

The supplement also provides a nice table that explains when each of the IHE profiled solutions is most useful. The following is Table Q-1, IHE Encryption Solution Overview:

IHE ATNA (point-to-point using TLS)
·  environment uses networking transactions (e.g., XDS/XDR); or
·  to-be-protected data concerns (representation of) XDS/XDR transactions and packages; and
·  confidentiality need applies between internet hosts (point-to-point)

End-to-end using WS-Security
·  environment uses web-services (SOAP); and
·  to-be-protected data concerns (representation of) transactions and packages (e.g., XDS/XDR); and
·  (partial) confidentiality need applies to intermediaries between end-points (end-to-end); and
·  where encryption between hosts is not sufficient

IHE XDM Email Option (using S/MIME)
·  environment uses XDM with exchanges based on Email (SMTP); and
·  to-be-protected data concerns (representation of) XDM media content; and
·  confidentiality need extends from the sender up to the final recipient’s Email system (end-to-end)

IHE Document Encryption
·  environment uses any means for data exchange, in particular non-XD* means; or
·  to-be-protected data concerns (representation of) arbitrary data (documents), in particular non-XD* packages; or
·  confidentiality need applies between arbitrary end-points (end-to-end), in particular where intermediaries or unanticipated workflows are involved

IHE XDM Media Encryption option
·  environment uses XDM; and
·  to-be-protected data concerns (representation of) XDM media content (content and metadata) on physical media; and
·  confidentiality need matches path from creator to receiver (importer) of media

IHE PDI privacy option (using CMS)
·  environment uses PDI; and
·  to-be-protected data concerns (representation of) DICOM data on media; and
·  confidentiality need matches path from creator to receiver (importer) of media

Implementing the Document Encryption (DEN) profile should be very easy, as the profile leverages a commonly implemented standard: CMS, the same underlying standard that the IETF profiled for S/MIME e-Mail. The DEN profile is clearly not S/MIME, but rather a more general purpose use of this underlying standard.
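As an illustration of how widely implemented that underlying standard is, the OpenSSL command-line tool can already produce CMS EnvelopedData for a recipient's certificate. This is only a sketch of the general CMS mechanism, not the DEN profile itself (DEN adds its own constraints, such as password-based recipients and specific algorithms), and the file names are illustrative assumptions:

```python
import subprocess

def cms_encrypt_command(in_path, out_path, recipient_cert):
    # Assemble an OpenSSL invocation that wraps one document (any format)
    # into CMS EnvelopedData for one recipient certificate.  File names
    # are illustrative; DEN further constrains algorithms and recipients.
    return [
        "openssl", "cms", "-encrypt",
        "-binary",            # do not canonicalize line endings
        "-aes128",            # content-encryption cipher
        "-in", in_path,
        "-out", out_path,
        "-outform", "DER",    # raw CMS rather than an S/MIME wrapping
        recipient_cert,
    ]

# To actually run it (requires OpenSSL and a real recipient certificate):
# subprocess.run(cms_encrypt_command("report.xml", "report.xml.p7m",
#                                    "recipient.pem"), check=True)
```

Only the holder of the private key matching recipient.pem can decrypt the result, which is what lets the encrypted document travel over any transport, trusted or not.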

To help the implementer, there is a page on the IHE wiki that points to toolkits and implementation notes. On this page an implementer can find different solutions that they can simply leverage. There are examples of files that have been encrypted so that you can test that your system can decrypt them. There is very little need to implement the details when there are so many current implementations available. 

IHE expects that when others implement this profile, that they can use this information. As a wiki, the expectation is that as new information is discovered the community (that’s you) will update the page. Don’t wait for some ‘authority’ to fix something that is wrong on these wiki pages. Feel free to update them as necessary (common wiki behavior is expected).

Digital Identity for Medicare Beneficiaries

Last week a bipartisan coalition of lawmakers introduced legislation that would bring digital identities to Medicare patients. This is fantastic news, and a logical extension of the VA and DoD use of their Common Access Card (CAC). This will bring another very large chunk of the population under one standard for Digital Identity. Those against a common identity for all patients will surely have something to say about this, but from what I have read the legislation has broad support today.

A huge administrative burden in healthcare today, especially as we try to link our data across all the places we have ever been treated, is the identity of the patient. Due to the long-standing prohibition on funding any national patient identity, we continue to push all kinds of other demographics into ONE SYSTEM in the hope that the system can link the various identities into one. Sometimes it works, sometimes it fails. Failing to find all the places is one form of failure, usually resulting in some data not being used when it could have been. This failure is not very critical today, since before HIE or NwHIN there was no sharing at all, so some linking is better than none.

The failure that causes worry is linking more than your data: you might be treated differently than you should be because data from someone else was reported as yours. If this other data is simply displayed to the clinician, they will typically catch the mistake. But we are trying to get Clinical Decision Support to automate the data analysis, where there is little chance of detecting this failure.

So I am excited that this is being done, and I like their approach:

The legislation would require a two-step plan to develop and implement the program.

Under the first phase of the plan, the HHS secretary would set up a smartcard pilot program in specific regions to boost the quality of care and the accuracy of Medicare billing, and reduce the likelihood of identity theft and waste, fraud and abuse.

Under the second phase, officials would consider the viability of expanding the program and implementing the smartcard technology nationwide (KTVZ, 9/14). If successful, the legislation would authorize the distribution of these smartcards to all Medicare beneficiaries.

I know that some will be worried about the implications of a unified identity, but the alternative is clearly not safe or efficient. As someone who worries about Privacy and Security, I find the current solution very scary. It is a huge database of highly identifiable data, some of it valuable for financial fraud, some for healthcare fraud. I am not saying these databases are not being secured, but we do know that risk is never zero. The huge database we are forced to build, because we are forbidden to have a common healthcare identity, is a very interesting target to those who could benefit from it.

Having a standard healthcare identity card allows all healthcare treating facilities to focus on one system, one identity. There is then no wasted time and effort designing all kinds of user interfaces and system interfaces to read various identity cards - something we expect humans to do today. There would also be less wasted design and implementation time put into interfacing with an HIE or the NwHIN.
Today this is simply overhead that adds cost to healthcare. There is very little benefit to having thousands of identities held independently at thousands of facilities. Let's identify the perceived benefit and figure out if there is a different way to satisfy it.

Friday, September 23, 2011

ONC Call for Participation - Data Segmentation

This call went out publicly on Monday (see below) and I was already pre-signed up. I have been involved with this topic for many years. I am disappointed that the introduction to this topic treats it as a ‘persistent privacy issue’. Although it does seem to be persistent, the persistence has more to do with unwillingness than with a lack of ability to provide consumer controls. I will totally agree that the security community, myself included, doesn’t do enough to explain how to do this.

I have blogged on this topic many times. At first the “Data Segmentation” phrase threw me off; this is a term the security community uses to describe something different. Data Segmentation is the process of carving out extremely sensitive information and keeping it physically isolated from common data. I thus wrote: Data Classification - a key vector enabling rich Security and Privacy controls. This did quite a bit of good to help the community understand “Data Classification”, and has resulted in a new understanding of ‘confidentialityCode’: Proposal for confidentialityCode vocabulary.

This is only one of the problems included in the topic of “Data Segmentation”; embedded inside is the issue of how a patient can express their privacy policy or constraints, and how that can be enforced. To me this is a very different problem, and would never be seen by a security professional as related to “Data Segmentation”. This is not to say that Security doesn’t have a solution; it is just known as Privacy Policy. In fact the solution to this need has very little to do with the Data. I have tried to express multiple times that Privacy Policies, otherwise known as Constraints, are not ‘metadata’. Could an architecture consider them metadata? Sure it could, but it would result in a fragile and non-scalable solution. See: One Metadata Model - Many Deployment Architectures, Data Objects and the Policies that Control them, and ConfidentialityCode can't carry Obligations.

Then there is capturing the Privacy Policy (aka Consent, Obligations, Constraints): how to encode it, communicate it, and manage it in a way that holds up to legal challenge. We have some Stepping stones for Privacy Consent, while still developing advanced consents: IHE - Privacy and Security Profiles - Basic Patient Privacy Consents, Consent Management using HITSP TP30, The meaning of Opt-Out, Opt-In, Opt-Out.... Don't publish THAT!, Consent standards are not just for consent, Consumer Preferences and the Consumer, and RHIO: 100,000 Give Consent.

We even have some actual work done by some Health Information Exchanges - Draft Affinity Domain Policies

I look forward to helping the community understand. I commit to doing a better job of explaining how this is done in a standards based way that is robust, can be implemented in multiple deployment architectures, and is scalable.

From: S&I Framework Admin []
Sent: Monday, September 19, 2011 12:52 PM
To: undisclosed-recipients
Subject: Call for Participation - Data Segmentation

Call for Participation
Office of the National Coordinator for Health Information Technology (ONC)
Data Segmentation Initiative

The Office of the National Coordinator for Health Information Technology (ONC) Offices of the Chief Privacy Officer and Standards and Interoperability are launching an initiative to address standards for the ability to exchange parts of a medical record (often called data segmentation).

As announced by ONC in a recent post to the Health IT Buzz Blog:
“This project aims to make progress on the persistent privacy issues raised in the PCAST report. The goal of this project is to enable the implementation and management of health information disclosure policies originating from a patient’s request, statutory and regulatory authority or organizational disclosure requirements.

The project aims to examine and evaluate the standards needed for sharing individually identifiable health information (including standards recommended by the Health IT Standards Committee through the use of metadata tagging of privacy attributes in standard clinical and policy records and record segments). The initiative will develop use cases that define the current need for data protection services, such as a patient’s directive not to disclose substance abuse records in accordance with 42 CFR Part 2, and will then extend current standards-based software models to demonstrate interoperability. Testing will be based on a reference model aligned with a set of use cases and functional requirements developed by the S&I community.”
Dr. Farzad Mostashari, National Coordinator for Health Information Technology

On behalf of ONC, I am pleased to announce and invite your valued participation in the launch of the Data Segmentation Initiative. This initiative takes place under the auspices of the ONC Standards & Interoperability Framework in conjunction with the Office of the Chief Privacy Officer.

Meeting logistics and reference materials are posted on the ONC Data Segmentation wiki page:

If you would like to volunteer to participate in the Data Segmentation Initiative, please review the documentation on the S&I Framework wiki:  Once you have read through the material regarding participation, levels of commitment, guidelines for participation and voting rights,  please sign up to participate in the Data Segmentation Initiative by completing the registration form at:

The official launch of the Data Segmentation Initiative will be held on October 5th from 1:00 – 2:30 pm EST with opening remarks from myself and Dr. Doug Fridsma (Director of the Office of Standards and Interoperability), a brief overview by Johnathan Coleman (Data Segmentation Initiative Coordinator), and a presentation by Melissa M. Goldstein, JD on the Whitepaper released by ONC in September 2010 entitled, “Data Segmentation in Electronic Health Information Exchange: Policy Considerations and Analysis,” available at:  Details on the Data Segmentation launch including web meeting access and call in information are posted on the wiki:  .  In addition, we will host Data Segmentation breakout sessions at the S&I Framework Face-to-Face Meeting, October 18-19, 2011 at the Hyatt Regency-Crystal City in Arlington, Virginia.

Your perspectives, expertise and experiences are critical to the success of this initiative and we look forward to your participation on the Data Segmentation Initiative.
Joy Pritts, JD
Chief Privacy Officer
Office of the National Coordinator for Health IT

Securing mHealth - the role of IHE profiles

I was notified and volun-told by Keith a few hours before he submitted his new Profile Proposal "IHE XDS for mHealth access to HIE". He and I have been bumped around constantly by mostly invisible forces pushing for both RESTful interfaces and simpler ways to access XDS (and XCA). This profile proposal is born of that abuse. It is therefore not fully developed yet, although it is more fully developed than any of the other profile proposals put before the IHE ITI committee this week.

Securing mHealth is not easy; there are so many risks that arise when one moves onto an inherently portable device. So, very quickly there are questions about how IHE will specify the "Security Considerations" of this profile that Keith presents.

Of course I will first point very quickly to the IHE process for determining exactly this: the Cookbook for Security Considerations. This process has been discussed on my blog multiple times - There is No Security Pixie Dust - it is risk based and thus very powerful, flexible, and scalable. This process gets the profile writers to think through security/privacy risks and place reasonable requirements into the profile. The result is typically a few recommendations to include 'capabilities' such as the IHE ATNA profile.

When I mentioned this to Keith, he quickly pushed back and asked why simply HTTPS couldn't be used. I pointed out that HTTPS is HTTP over TLS, just like IHE ATNA says. It turns out he was worried about the "Mutual Authentication" aspect of IHE ATNA. It was at this point that I understood that Keith had fallen into a trap that many people fall into, and seeing someone like Keith do it is to recognize that it is a huge and inviting trap.

When the IHE ATNA profile indicates that a product must be capable of using Mutually-Authenticated-TLS, we are indicating a capability. We are not dictating an operational reality. So if a hospital chooses to use an application that uses the new "mHealth access to XDS" profile, they will do a risk assessment of their operational environment and determine which parts of IHE ATNA they want to use and which parts they don't. If they determine that they have good control over their mobile devices (how, I don't know, but let's imagine for now), then they can choose not to use the client side authentication portion of IHE ATNA. It is their choice, and this choice is enabled by the capability built into the IHE ATNA profile that is mandated to be grouped with the new profile. The alternative is that the new profile says nothing about security considerations, the application developer implements no security, and the operational environment has no capabilities to draw on.
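This capability-versus-operational-choice distinction shows up directly in configuration. In the sketch below (Python's stdlib ssl, with a hypothetical require_client_cert flag), the product always carries the mutual-TLS capability; whether client certificates are actually demanded is decided by the deployment's own risk assessment:

```python
import ssl

def atna_server_context(require_client_cert, ca_file=None):
    """Server-side TLS context where mutual authentication is a capability.

    require_client_cert is a hypothetical deployment setting: the product
    can always do mutual TLS, but the operational environment chooses
    whether to demand it.  ca_file is a deployment-specific assumption.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # trust anchors for client certs
    # Operational choice: full mutual authentication, or server-side only.
    ctx.verify_mode = (
        ssl.CERT_REQUIRED if require_client_cert else ssl.CERT_NONE
    )
    return ctx
```

The same binary serves both deployments; only the risk-assessment-driven setting differs.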

We could hope that the application developer thinks about security, but then what security do they include that they are sure the service will also include? Let's say the application developer chooses to use S-HTTP while the service provider chooses HTTPS; this results in an interoperability FAIL. (Yes, I know this is contrived, but I expect you can see that there are many possible mismatched choices that could be made.) This is the exact reason why IHE exists: to scope interoperability problems into a single interoperable solution - a profile. This is exactly what IHE ATNA does for secure communications. This does not mean that S-HTTP is a bad idea; there is nothing wrong with using S-HTTP. It simply means that no claims can be made about how that solution complies with IHE.

Another example is the Security Audit Logging that is part of IHE ATNA. Again, this is a capability, not an operational mandate. If the local environment doesn't have an Audit Record Repository, then the capability is turned off - or, hopefully, simply redirected to a local file with appropriate file-size management.

All of this is simply Risk Assessment "flow down", meaning that at each level of design one does a Risk Assessment and manages the risks as best one can at that level, documenting environmental assumptions, security capabilities, and residual risk. The next level of design takes the prior design outputs as inputs to its own Risk Assessment, and does what it can to address the risks using the controls available to it. Eventually one gets to the operational environment, where again the Risk Assessment takes as inputs all the outputs of the prior Risk Assessments. The operational environment will use all the security capabilities available, possibly buying more controls, writing procedures, building walls, and eventually covering residual risk with insurance. This is very much the model outlined by IEC 80001.

a) Profiles need to consider security, and make recommendations on security profiles to use and any unresolved security risks critical to be addressed in the product design.
b) Profiles are there to assure interoperability; this is true of the security profiles as well. The security profiles assure that the security choices on either side of a transaction are interoperable.
c) The security profiles are not there to handle security completely, just interoperability. This is why user-identity assertions are included, but not the user interface used nor the access controls. The application design and operational environment deal with user interface and access controls in a functional way.
d) Profiles specify capabilities; they do not mandate that those capabilities are used in an operational environment. So IHE ATNA specifies that a product claiming IHE ATNA supports Mutually-Authenticated-TLS for all network traffic communicating protected data, but in an operational environment this might be used on no connections, some connections, or even downgraded to just server-side authentication.
e) Risk Assessment at many levels assures that each level of design has done what it can to enable a secure operational use. Ultimately the operational environment risk assessment is responsible.

Monday, September 19, 2011

Preparing for IEC 80001 - Security

On Wednesday I will be presenting the soon-to-be-published Security Technical Report IEC 80001-2-2, "Guidance for the disclosure and communication of medical device security needs, risks and controls". There is an assumption that the audience understands the basics of IEC 80001-1, "Application of Risk Management for IT-Networks Incorporating Medical Devices". I have been involved in the creation of this set of specifications from the beginning and have provided some background here on my blog. This is an open webinar put on by GE Healthcare.

When: Wednesday, September 21, 2011, 2:00pm-3:00pm Eastern Time
Where: At Your Desk!
Please scroll down to register for Session 1

Content Summary: IEC-80001 can seem like an overwhelming process. That is why GE Healthcare brings you this educational series.

This presentation is the first in a three-part series that will discuss the soon-to-be-released set of Technical Reports that support the IEC 80001-1 Application of Risk Management for IT-Networks Incorporating Medical Devices
  • Session 1 - Security. Technical Report for the disclosure and communication of medical device security needs, risks and controls
  • Session 2 - Wireless. Technical Report for Wireless Networks
  • Session 3 - Step-by-step Risk Management of Medical IT-Networks
This session will review the anticipated Technical Report "Technical Report for the disclosure and communication of medical device security needs, risks and controls" and provide an opportunity to ask questions. Further, this session will review an informative set of common security capabilities, review input for Medical Device Disclosure Statement, and review Disclosure Statement input for Hospital Risk Assessment.
Problems with Registration? eMail ---> or call Mark Grabowski at 414.721 2805

Draft Affinity Domain Policies

I was asked to comment on the Connecticut HIE Policies. This is a really great example of the administrative work that must be done before one can really evaluate the security and privacy needs of an HIE. These policies were written using many ISO standards and the IHE Affinity Domain planning kit. Please go to the site, as they have a beautiful breakdown of the many policies that are needed. Many people don't believe me when I say that there are many layers of policy.

These policies are a really good example of how an HIO can look at what is out there, pull what they understand, and do what is necessary to get done what they need. One example of this came up last week during the HL7 discussion on confidentialityCodes. Connecticut was confused by the vocabulary offered by HL7, and thus wrote their own vocabulary. They actually pulled more from ISO 13606, but didn't use that vocabulary either. We were lucky enough to discuss this in detail last week. It is a good thing that HL7 will be revising its documentation and vocabulary so that we can have a vocabulary that is understood beyond one HIE.
On behalf of the Health Information Technology Exchange of Connecticut (HITE-CT) Board of Directors, I am writing to inform you that HITE-CT is currently gathering public comment on the proposed policies for the implementation of a state-wide health information exchange.
 The establishment of policies and procedures are a key component for an effective HIE and sets the boundaries for data sharing between the health information exchange and its participating partners.  Additionally, the HITE-CT policies and procedures will contribute to the efficiency and effectiveness of the HITE-CT.
 HITE-CT is currently gathering your input for the policies and procedures that will govern the practices for Connecticut’s Health Information Exchange.  There are four separate opportunities for you to comment and we hope that you will make every effort to attend or submit your comments electronically on the Comments and Resolution Form located on the website.
 These policies are now posted on the HITE-CT website and are available for public comment. The direct link to the Policies and Procedures page is  The policies may also be accessed by going to the DPH website at under featured links: Health Information Technology Exchange of Connecticut”, then click on “Policies and Procedures” located on the left hand bar menu.
 The schedule for attending a public meeting and submitting public comments on the Health Information Technology Exchange of Connecticut’s (HITE-CT) proposed policies is as follows:
September 20, 2011, 8:30 AM – 10:00 AM: Legal & Policy Committee Meeting (formerly known as DOIT), 101 East River Drive, East Hartford
September 22, 2011, 1:00 PM – 3:00 PM: Technical Infrastructure Committee Meeting, 101 East River Drive, East Hartford
October 4, 2011, 8:30 AM – 10:00 AM: Legal & Policy Committee Meeting, 101 East River Drive, East Hartford
October 17, 2011, 4:30 PM – 6:30 PM: Board of Directors, 101 East River Drive, East Hartford
 Please Note: At the October 17, 2011 Board of Directors meeting a discussion and a vote to adopt the policies and procedures will take place.
 We kindly ask that you distribute this information to anyone you think may be interested in commenting on the policies.
I would love to see more of this. It is always very important to see how a standard is understood or misunderstood so that we can make it better.

Wednesday, September 14, 2011

Standards work is motivating because it gets used and improves lives

The presentation by Dan Pink on 'what really motivates us' was posted again to Google+ today. When people ask me why I participate in standards, I have always given the answer "because it is really cool to see what you do implemented and saving lives." Dan provides me the background as to why this works.
This is exactly why I work on Standards in Healthcare. What I create is used by others, and my motivation is to see the change. To know that people's lives are saved, made better, made less painful. This is why I blog, this is why I am so passionate about education and outreach. This is why I will help a competitor understand something. This is why I subject myself over and over again to helping people 'not reinvent the wheel.'

This is also why it annoys me when the standards organization calls these 'products' and charges $$ to use them. I want my work to be used more than I want to be paid to create the work. This is why I am more creative working for DICOM or IHE, and more satisfied when those works are used. I think HL7 should think about this: how much more creative could the HL7 standards be?

As Dan says in the presentation, in order to get to this state one must be paid enough money to take money off the table. This is why I unabashedly work for GE Healthcare, and I have no problem with the fact that an organization brings together people who create using one motivation with people who do mechanical (manufacturing) work with a different motivation, to produce value. This is what a vendor does: put together the whole package.

Yet I have the best of both worlds: I get to see GE Healthcare create value and deploy it, and as a standards developer I also get the pleasure of seeing others use these same standards.

The open-source community is missing this whole package. Yes, the creative part is free (somehow the participants have achieved the money-off-the-table state), but there is no one there to finish the job. Hence open-source doesn't dominate as it should, if one looks only at the technology (the creative part). This niche is being filled to some degree by enterprising organizations that take the free creative part and do the rest. But they will never 'own' their own destiny.

Sunday, September 11, 2011

You have a Right to an Access, not good or useful Access

I was approached by a peer at the HL7 meeting. She was asking my advice on how she can compel her old doctor's office to send all her records to her new doctor. The old doctor's office is unwilling to do this - more likely they are unable. I told her that HIPAA gives her the Right to an access of her medical records. Her response was that the old hospital told her they could sell her a printed copy. I admitted that was all that the HIPAA "Right" required, and yes, it allows them to charge a reasonable processing fee.

I asked about electronic access, such as through a web site at the hospital (sometimes referred to as a tethered PHR); she is not offered anything like this.

She asked for them to put the records onto a CD-ROM, something that is becoming more common. They said they could not do this either. I find this one strange, as Meaningful Use does check for this, even if the criteria are poorly written and badly tested.

I asked about her old doctor's use of 'the Direct Project'; the short of it is that they couldn't spell Direct. This is a great geek project, but it is still too convoluted for the typical doctor; further, being PUSH-only means that it only supports use-cases where the data holder knows exactly what data to send, when, and to whom. Too many variables, and it totally misses a large number of query/retrieve use-cases.

I asked if she has a PHR, such as HealthVault. She doesn't have one, but she checked into it and her old health provider doesn't support access to a PHR. I suggested that this might be her best opportunity to get electronic access, although it is still possible she will not get all the data, because the provider won't send it all. I note that HealthVault does have the capability to take and manage it all; they are impressively capable.

There is no expected time at which this old doctor will be on a Health Information Exchange. They are confused about whether this is a good thing to do or not, given the confusion around this topic in Meaningful Use. I have to agree here. I personally think that an HIE is the right solution, as it supports a huge number of use-cases. But if the US government is going to punish them for doing it wrong, then why make any effort to do anything? I am glad that some HIEs are still being put together. I am saddened by those that have shut down, or have changed to a Direct model.

There is simply no incentive for her old doctor to work hard at this problem. She is clearly leaving them, and thus there will be no more billable opportunities. Any time they take to help her out is simply time spent with no return.

This is echoed by my family. I asked them all how they manage their medical records and interact with their healthcare providers. Almost universally they feel they must carry printed copies, as that is the best they can get; and they are not using a PHR. It seems the PHR 'solution' is still looking for a problem that it really solves.

I did get rather positive results on questions to my family around their use of their healthcare provider's web site for accessing lab data and scheduling appointments; their use of internet healthcare knowledge sites (WebMD, DrFirst, etc.); and whether they feel empowered to question their healthcare provider. I asked these questions because I am not convinced that we can teach our way into a population that takes the initiative in healthcare decisions, a topic for which Keith has created a high-school course outline.

So, as Keith says...
I wanna be an e-Patient
To be a patient like Dave
Give me my damn data!
'Cause it's my life to save
We need healthcare to get out of the mindset that the data is what is valuable. Yes, it had value when it was created and initially used. But that value can't be increased unless the data flows to the next possible use. It is what the next possible use 'does with the data' that creates more value. I hear many people saying that they know how to add value if the other guy would just share the data. We must be more willing to share the data, not knowing what value will be made from it. We must be confident enough that 'we' provide the best value that we are willing to share the data. A patient-centric world cares more about the patient than about data that is locked away and never allowed to create new value.

This is not a technology problem; it is a problem of will. There are many technology solutions lying there on the table. We need the will to deploy them.