Friday, September 2, 2011

Forged Google Cert and such - Fighting FUD

Nothing new here, just more failure of the Internet Browser CA community and the model of root certificate trust that the Internet chose to deploy so many years ago. This truly is a big problem in the Internet Browser world. I have covered this before - SSL is not broken, Browser based PKI is. But I know that this kind of event causes widespread Fear, Uncertainty and Doubt (FUD). I want to help contain the FUD to the scope where it should be applied -- Internet Browsing.

This problem has nothing to do with the model of secure communications promulgated by IHE ATNA. Some will think that we are ok because of the use of Mutually-Authenticated TLS (or WS-Security, or S/MIME) -- authenticating not just the server, but also the client. This is not the reason we are in better shape, although mutual authentication does protect against man-in-the-middle attacks and some of the other negotiation vulnerabilities known in TLS.

The reason why we are in better shape is that operationally we are building a trust network independent of the internet browser trust network. In the trust network deployed for browsers, they (the nebulous Internet community) had to cut corners. Back when this was done, it was the right thing to do. But when we are building Healthcare Information Exchanges (HIE) we should be careful to build the PKI trust network properly. Yes, this is difficult, but the things we are protecting are important, and current technology is far more capable than the technology available back when the Browser corners were cut. Trust is hard work. We have the opportunity to do it right, and cutting corners is a bad idea.

We MUST require that all certificates be fully validated at reasonable intervals. Ok, this is not said this nicely everywhere, but it should be made more clear. Some will note that it is just a "reasonable interval"; this interval needs to be assessed based on the risk and the cost of validation, specifically the cost of doing the certificate revocation checking part of validation.

To be more actionable:

  1. Every use of a certificate MUST validate that the certificate signature is valid, that the date/time ranges are valid, and that the certificate chains to a trusted root. This is all local math, and is typically already done by your toolkit (Trust Store). There is little reason not to do this.
  2. Every use of a certificate should validate the 'use' of the certificate and the 'use' of the certificates in the chain (Key Usage, and Basic Constraints). For example, if the certificate is being used for S/MIME, then it must indicate that it is issued for that purpose. This failure was the root of one of the vulnerabilities a few weeks ago, where Apple was found to not be validating that the certificate was being used properly. In that case a certificate was issued (signed) by a certificate that was not authorized to issue certificates, meaning that the certificate chain should have been seen as invalid. This is listed independently because the certificate trust-store doesn't know why you want to use a certificate, so it doesn't validate the 'use'.
  3. Revocation checking should be done on a reasonable basis. This applies to end certificates, but also to all the certificates in a trust chain. Checking online at every use would cause many network transactions, which is expensive, but there are ways to make it reasonable. You can use a CRL file that is pulled on a reasonable basis, while each use of a certificate is validated against the CRL cached locally. This model reduces the per-use overhead to simple math. Because this is not done for browser based PKI, it typically is not automatically done in your toolkit, so poke around and figure out how to get it done.
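To make items 1 and 2 concrete, here is a minimal sketch in Python. The `Cert` record and its field names are purely illustrative stand-ins for whatever objects your toolkit hands you, and the actual cryptographic signature verification is elided -- this only shows the date-range, CA-authorization, and chain-linkage logic:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Cert:
    subject: str
    issuer: str          # subject name of the certificate that signed this one
    not_before: datetime
    not_after: datetime
    is_ca: bool          # BasicConstraints CA flag: may this cert issue others?

def chain_is_valid(chain, trusted_roots, now):
    """chain[0] is the end-entity cert; chain[-1] must name a trusted root."""
    for i, cert in enumerate(chain):
        # Check 1: date/time ranges are valid -- purely local math.
        if not (cert.not_before <= now <= cert.not_after):
            return False
        # Check 2: any cert that issues another must be authorized (CA=true).
        # This is exactly the check whose absence allowed the Apple bug.
        if i > 0 and not cert.is_ca:
            return False
        # Chain linkage (real signature verification elided in this sketch).
        if i + 1 < len(chain) and cert.issuer != chain[i + 1].subject:
            return False
    return chain[-1].subject in trusted_roots
```

Note how a chain whose intermediate is a mere end-entity certificate fails at check 2 even though every signature would mathematically verify.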


Note that this needs to scale as well. There are very clear cases where risks are high and online revocation checking is necessary, but there are also cases where risks are mitigated by many other layers of security and revocation checking is not as urgent (it could be done only at configuration time).
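The cached-CRL approach from item 3 can be sketched as follows; names here are hypothetical, and `fetch_crl` stands in for however your environment retrieves the CRL file. The refresh interval is the "reasonable basis" knob, tuned to your assessed risk:

```python
import time

class CrlCache:
    """Pull the CRL on a configurable interval; check serials locally."""

    def __init__(self, fetch_crl, refresh_seconds=3600):
        self._fetch = fetch_crl          # callable returning revoked serial numbers
        self._refresh = refresh_seconds  # tune to risk: minutes, hours, or config-time only
        self._revoked = set()
        self._fetched_at = None

    def is_revoked(self, serial):
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self._refresh:
            self._revoked = set(self._fetch())   # the only network cost
            self._fetched_at = now
        return serial in self._revoked           # every use: a local lookup
```

The network transaction happens once per interval; every certificate use pays only a local set lookup.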

So the solution is simply good clean PKI - checking the certificates, the chain, and revocation. This is all part of the standards and the technology. It is simply not part of the trust model used in Internet Browsers.

Of course all the other good security stuff applies too. The ends of any secure conversation do need to be secure themselves. The ATNA philosophy puts this as an operational requirement to check before any node is declared a 'secure node' and issued a certificate. Said another way: if a remote system has a certificate that validates, then it can be treated as a 'secure node'; if it is not actually secure, then that is an operational failure.




Thursday, September 1, 2011

Reminder: IHE's ITI Domain Open Call for Proposals ends Sept 5, 2011. Detailed meeting info and important calendar dates enclosed.



Greetings IHE Community,
The Call for Proposals for the IT Infrastructure (ITI) domain will close shortly on Monday, September 5, 2011. Interested parties are invited to submit a brief proposal for new IHE Profiles and/or White Papers to be considered for development in the 2012-2013 Profile Cycle. 
This e-mail and its attachments detail the annual planning cycle process (#1 below), including the Planning Proposal Evaluation Kickoff Meeting in October 2011 (#3), Proposal Review Planning Webinars (#2) that will lead up to the October meeting, and the Technical Proposal Evaluation Meeting in November 2011 (#4). Please continue reading for more details on these events.
NOTE: Please open and read the attached calendar invitations and documents in this email to begin planning for the October/November IHE F2F meetings.
All Proposals must follow the structure and format of the attached brief IHE Proposal Template and must address the following items:
a)      What is the problem you propose to solve by this proposal and how is that problem expressed in practice (e.g., a use case)?
b)      How would fixing this problem improve health care in practice?
c)       What specific components of standards could be used to solve this problem?
d)      Your proposal must identify one or more potential editor(s) in the event that the proposal is selected for further evaluation and possible development. 
e)      If possible, please include some indication of the business case surrounding the situation when describing the problem. For example, is there an economic motivation for addressing this problem immediately?

Summary of IHE’s Multi-Phase Proposal Process:
1.       Submit Brief Proposals by September 5, 2011:
ITI’s Call for Proposals opened July 19 and will close on September 5, 2011. Submit a Brief Proposal using the attached form to the domain email listed below.

2.       ITI’s Planning Committee Proposal Review Webinars- September 22, 28, & 29, 2011:
Save the Date! Authors for all accepted proposals are required to present the Brief Proposal on one of three Proposal Review Webinar(s). These webinars will be considered Decision meetings. Consensus at each webinar on the proposals’ validity and any requests for proposal authors to combine initiatives will be discussed. Proposals will not be rejected at this stage. Proposals will only be aligned with one another where appropriate.
In order to ensure voting privileges at the ITI Proposal Review Webinars we will require each member (organization) to attend a minimum of 2 (out of the 3) webinars that precede the  ITI Planning Proposal Meeting. The domain sponsors will be taking very careful attendance at each webinar to ensure fair execution. Please ensure you clearly understand this requirement.
ITI’s Proposal Review Webinars- Attendance on 2 out of 3 webinars is required to maintain/secure voting rights.
·         Webinar #1- Thurs. Sept. 22, 2011 at 9:00-11:00 am CT, (10 ET, 7 PT)
·         Webinar #2- Wed. Sept. 28, 2011 at 9:00-11:00 am CT (10 ET, 7 PT)
·         Webinar #3- Thurs. Sept. 29, 2011 at 10:00-12:00 pm CT, (11 ET, 8 PT)

3.       2012-2013 ITI’s Planning Proposal Evaluation Kickoff:
Save the Date! October 11-12, 2011 in Oak Brook, IL.* Click here to RSVP.
We urge those who submit proposals or white papers to attend the Proposal Evaluation Kickoff Meeting in person or via WebEx. In-person advocacy has proven to be the most effective way to ensure your brief proposal is understood and accepted by the committee.

Please see attached document entitled, 2012-2013 Meeting Schedule & Information to begin planning for this event.
4.       2012-2013 ITI’s Technical Committee Proposal Evaluation Meeting:
Save the Date! November 15-16, 2011 in Oak Brook, IL.* Click here to RSVP.
Authors of proposals that are accepted at the Planning Proposal Evaluation Kickoff Meeting and given to the IHE Technical Committee for review are required to write and present a detailed proposal during the Technical Proposal Evaluation Meeting.

Please see attached document entitled, 2012-2013 Meeting Schedule & Information to begin planning for this event.


Deadline: September 5, 2011 at 11:59 pm CT
Email the completed brief IHE Proposal template to the corresponding domain email address below before September 5, 2011 at 11:59pm CT.
Committee: ITI Planning Committee
Domain Email:
Planning Co-Chair 1: Karen Witting
Planning Co-Chair 2: Michael Nusbaum

We look forward to working with you during the IHE 2012-2013 Profile Cycle. Please contact the IHE secretary at secretary@ihe.net if you have any additional questions or need further assistance.

* Only IHE International Members are allowed to attend IHE Meetings. To apply for membership click here. Please open the attached document entitled, 2012-2013 Meeting Schedule & Information to begin planning for this event.

Thank you,
IHE IT Infrastructure Planning & Technical Committee Co-Chairs

Tuesday, August 30, 2011

Proposal for confidentialityCode vocabulary

I have been complaining about the definitions of the confidentiality codes both in the active HL7 development and in my past posts.
My main reason for not simply providing my own definitions was to allow for discussion of my concern that we have conflated the confidentialityCode meaning with consent status. My point is that consent status can affect all of the confidentialityCodes, not just R or V.

I figured we should learn from the experience of military data classification, a system that deals with very sensitive data in a different way. (Note that we are already ahead of the military in that we have a global vocabulary; take a look at the mapping mess that is military data classification.) The military classifications use relative “harm to the country” as their measure. Yes, this is different from healthcare information, but I think we can see that “harm to the patient” is what we have been discussing. Especially if we look at ‘harm’ in a broad sense that includes
  • reputation damage, 
  • emotional damage, 
  • family relationship damage, 
  • financial damage, and 
  • physical damage (safety). 
(possibly more, I haven’t fully described patient harm in this context yet).

I think it is very legitimate to include in our definitions contemporary examples from well-known countries' policies, such as HIPAA vs 42 CFR Part 2 in the USA.

So, here is a potential draft using the existing codes, just with new definitions:
  • U – Unrestricted – No specific patient is identified and thus there is no patient harm risk
  • L – Low – Data have been de-identified and there are mitigating circumstances that prevent re-identification, such that there is only a remote harm risk to the patient if the data were exposed. The data however still require protection from exposure outside the intended use.
  • M – Moderate – Data are identifiable but consist of modest clinical information that would present moderate harm risk to the patient if the data were exposed. Examples include an emergency-data-set made up of non-sensitive problems, allergies, and medications.
  • N – Normal – Data are identifiable and of typical health information that would present typical harm risk to the patient if the data are exposed. This code is used for the majority of clinical information. Examples include what HIPAA identifies as Protected Health Information.
  • R – Restricted – Data are identifiable and of an especially sensitive nature that would present a high risk to the patient if the data are exposed. Examples include the data topics identified in USA 42 CFR Part 2 – “CONFIDENTIALITY OF ALCOHOL AND DRUG ABUSE PATIENT RECORDS”.
  • V – Very Restricted – Data are identifiable and of an extremely sensitive nature that would present a very high risk to the patient if the data are exposed. Data classified Very Restricted should be kept in the highest confidence.
Just a start, feel free to take, leave, or update
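One property of this draft is that the codes form a strict ordering by relative harm, which makes them machine-comparable. A minimal sketch of that idea (the enum and the `may_release` rule are my illustration, not part of any HL7 specification):

```python
from enum import IntEnum

class Confidentiality(IntEnum):
    """Proposed codes ordered by relative harm to the patient if exposed."""
    U = 0   # Unrestricted: no identified patient, no harm risk
    L = 1   # Low: de-identified, only remote re-identification risk
    M = 2   # Moderate: identifiable, modest clinical content
    N = 3   # Normal: typical health information (e.g., HIPAA PHI)
    R = 4   # Restricted: especially sensitive (e.g., 42 CFR Part 2 topics)
    V = 5   # Very Restricted: extreme sensitivity, highest confidence

def may_release(data_code, clearance):
    """A recipient cleared to 'clearance' may see data at or below it."""
    return data_code <= clearance
```

This mirrors how the military scheme is actually enforced: a single ordered axis of harm, with consent status handled as a separate concern.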

Saturday, August 20, 2011

MetaData - got questions, here are my answers

The ANPRM Metadata Standards to Support Nationwide Electronic Health Information Exchange has been the focus of my blog article One Metadata Model - Many Deployment Architectures. I will now fill in my answers to the questions that are asked in the ANPRM. These are not my formal answers, but simply my take on the questions. I hope that these answers do get people thinking. I don't include links in my answers because the details are in the article One Metadata Model - Many Deployment Architectures.
Question 1: Are there additional metadata elements within the patient identity category that we should consider including? If so, why and what purpose would the additional element(s) serve? Should any of the elements listed above be removed? If so, why?
The proposal defines many patient identity attributes that should not be part of the core metadata model. The metadata model should center on describing the document (object), not on describing the patient. Yes, the metadata needs to be sufficient to link the document content to the patient. But inclusion of the additional attributes, such as Address, Zip, Date-of-Birth, and Display Name, presents both a Privacy/Sensitivity/Security concern and an accuracy concern. An HIE is a longitudinal record, so it will include useful documents that are 20+ years old. Further, these attributes belong inside the document; they are not as valuable as metadata. Finally, the current and most accurate meta information about the patient's identity is the domain of the patient identity system (e.g. PIX, PDQ, XCPD). This should not be duplicated at the document level. They are different vectors through the information space.
Question 2:  In cases where individuals lack address information, would it be appropriate to require that the current health care institution’s address be used?
When any metadata value is potentially not available there should be well defined behavior. Substituting the institution's address in place of the patient's address is a bad idea, unless the patient really is permanently living in the hospital. Many metadata attributes are defined not because they are mandatory, but because when they are known there needs to be a consistent way of communicating them. Specifically, Address is not appropriate metadata at the document level, as my answer to Question 1 indicated.
Question 3: How difficult would it be today to include a “display name” metadata element?  Should a different approach be considered to accommodate the differences among cultural naming conventions? 
Display Name is an attribute of the Patient Identity Domain; not the document. It should not be considered a required document metadata value.

Question 4: Are there additional metadata elements within the provenance category that we should consider including? If so, why and what purpose would the additional element(s) serve?  Should any of the elements listed above be removed? If so, why?

Provenance metadata attributes are important, but should be kept at the whole-object (Document) level. The specific attributes inside the document must show their own provenance in the context of that document. This layered approach is important for scalability and growth. Document non-repudiation through digital-signatures is a very helpful standard functionality, but should not be incorporated into the metadata model. Digital-Signatures are a layer that can be applied independent of the metadata. This does not mean that provenance values such as author should be removed; these are appropriate metadata attributes. Simply separate the metadata needs from the technology used specifically to deliver non-repudiation. More basically, not all uses of data require the very high level of assurance that a Digital-Signature provides. Forcing Digital-Signatures as metadata will make the model very expensive. This is the same as your correct justification for separation of the confidentiality layer.

Question 5: With respect to the provenance metadata elements for time stamp, actor, and actor’s affiliation, would it be more appropriate to require that those elements be expressed in XML syntax instead of relying on their inclusion in a digital certificate?  For example, time stamp could express when the document to which the metadata pertain was created as opposed to when the content was digitally signed.  Because this approach would decouple the provenance metadata from a specific security architecture, would its advantages outweigh those of digital certificates?

Please separate the technology of Digital-Signatures and PKI credentials from the minimal metadata used for authenticity and integrity protection. Some uses will need minimal controls, while other uses will demand Digital-Signatures. By separating, you enable multiple policies. Knowing the origin of a document is a fundamental query parameter, not necessarily only needed for non-repudiation.

Question 6: Are there additional metadata elements within the privacy category that we should consider including? If so, why and what purpose would the additional element(s) serve?  Should any of the elements listed above be removed? If so, why
The metadata model should describe the object (Document), not try to duplicate the Privacy or Security layers. Privacy and Security policy will leverage all of the metadata provided. Sometimes a privacy policy will request that a specific document be tightly controlled; it will do this by referring to the document unique ID. Other times a Privacy policy will tightly control an episode of care, through the object's time/date ranges. The privacy and security policies are part of the Access Control design layer. These do not need to be duplicated in a metadata model; rather, the metadata model needs to include sufficient metadata to enable Access Controls. The identified Data-Type and Sensitivity are good examples.
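To illustrate the layering, here is a toy sketch of an Access Control layer consuming object metadata. The field names loosely echo XDS DocumentEntry attributes but are my own illustration; the point is that policy lives in this layer while the metadata merely describes the document:

```python
def access_decision(doc_metadata, policy):
    """Return True if the policy permits disclosure of this document.

    The policy refers to documents by unique ID, by confidentiality
    code, or by episode-of-care date range -- it never lives inside
    the metadata itself.
    """
    if doc_metadata["uniqueId"] in policy["blocked_documents"]:
        return False                      # patient restricted this specific document
    if doc_metadata["confidentialityCode"] in policy["blocked_codes"]:
        return False                      # e.g., 'R' data withheld from this requester
    start, end = policy.get("blocked_service_period", (None, None))
    if start and start <= doc_metadata["serviceStartTime"] <= end:
        return False                      # a tightly controlled episode of care
    return True
```

Changing a policy changes only this layer's inputs; nothing stored in the document metadata needs to be rewritten.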


Question 7: What experience, if any, do stakeholders have regarding policy pointers?  If implemented, in what form and for what purpose have policy pointers been used (for instance, to point to state, regional, or organizational policies, or to capture in a central location a patient’s preferences regarding the sharing of their health information)?  Could helpful concepts be drawn from the Health Information Technology Standards Panel (HITSP) Transaction Package 30 (TP30) “Manage Consent Directives?”  
Having the data point at the policy does not scale as objects age. You already enable individual objects to be controlled through having a unique identifier for the object. This is a much more sustainable model. Note that the document already discusses using layers of functionality, such that a wrapping layer (security layer) can include the policies that would need to be met before that layer allows the data to be unwrapped. So, please separate the layers and keep the metadata layer as attributes describing the object (document). I am advocating the model defined in TP30, that is, separation of Privacy Policies from Access Control from the objects they protect.


Question 8: Is a policy pointer metadata element a concept that is mature enough to include as part of the metadata standards we are considering?  More specifically, we request comment on issues related to the persistence of URLs that would point to privacy policies (i.e., what if the URL changes over time) and the implication of changes in privacy policies over time (i.e., how would new policy available at the URL apply to data that was transmitted at an earlier date under an older policy that was available at the same URL)?
See answer to 6 and 7. Policy pointers are not appropriate at the object metadata layer.  Policy is a different layer. 


Question 9:  Assuming that a policy pointer metadata element pointed to one or more privacy policies, what standards would need to be in place for these policies to be computable?
There is a lack of current standards for encoding privacy and security policy in an interoperable and computable form. In the meantime we leverage vocabulary such as confidentialityCode, and regional vocabulary for consent types (BPPC).


Question 10: With respect to the privacy category and content metadata related to “data type,” the HIT Standards Committee recommended the use of LOINC codes to provide additional granularity.  Would another code or value set be more appropriate? If so, why?
The use of LOINC might be sufficient. A USA Realm management of the codes used for the metadata 'data type' would be a good mechanism to build. This was a positive output from HITSP, but needs to be further refined and managed. The actual codes used will evolve over time, and there needs to be consideration of this evolution. However, the full LOINC vocabulary may be too fine-grained and present a privacy violation. We need to be careful to balance the need to discover/describe with the need to protect.


Question 11: The HIT Standards Committee recommended developing and using coded values for sensitivity to indicate that the tagged data may require special handling per established policy.  It suggested that a possible starter set could be based on expanded version of the HL7 ConfidentialityByInfoType value set and include: “substance abuse; mental health; reproductive health; sexually transmitted disease; HIV/AIDS; genetic information; violence; and other.” During this discussion, several members of the HIT Standards Committee raised concerns that a recipient of a summary care record tagged according to these sensitivity values could make direct inferences about the data to which the metadata pertain.  Consistent with this concern, HL7 indicates in its documentation that for health information in transit, implementers should avoid using the ConfidentialityByInfoType value set.  HL7 also indicates that utilizing another value set, the ConfidentialityByAccessKind value set which describes privacy policies at a higher level, requires careful consideration prior to use due to the fact that some items in the code set were not appropriate to use with actual patient data.  In addition, the HIT Standards Committee recommended against adopting an approach that would tag privacy policies directly to the data elements. What kind of starter value set would be most useful for a sensitivity metadata element to indicate?  How should those values be referenced?  Should the value set be small and general, or larger and specific, or some other combination?  Does a widely used/commonly agreed to value set already exist for sensitivity that we should considering using?
The data classification for sensitivity is an important metadata value. It needs to be sufficiently varied to allow for proper segmentation, but also sufficiently broad so as to not expose privacy. This is not to say that metadata be restricted to non-sensitive values, but rather that limiting the risk should be considered. Specifically, the ConfidentialityByInfoType is a very bad value-set for exposure outside a controlled environment. This value-set was defined in HL7 for purposes of policy encoding, not for use as metadata. The metadata values in the ConfidentialityByAccessKind value set are defined for interoperability. This poor documentation by HL7 was identified earlier this year and the HL7 committees are in the process of correcting the documentation. Part of this documentation will be a clarification of the proper uses of each value-set. The other part will be a more clear differentiation of the purpose of confidentialityCode vs other attributes that are used by Privacy Policy and Access Control enforcement such as author, time, unique identifiers, authentication, user-role, etc.


Question 12: In its recommendations on privacy metadata, the HIT Standards Committee concluded that it was not viable to include the policy applicable to each TDE because policy changes over time.  Is this the appropriate approach?  Are there circumstances in which it would be appropriate to include privacy preferences or policy with each data tagged element? If so, under what circumstances? What is the appropriate way to indicate that exchanged information may not be re-disclosed without obtaining additional patient permission? Are there existing standards to communicate this limitation?

Please separate out the Privacy Policy functionality from the Object metadata. These are separate domains. They are related and function as layers for scalability.

Question 13: With respect to the first use case identified by the HIT Policy Committee for when metadata should be assigned (i.e., a patient obtaining their summary care record from a health care provider), how difficult would it be for EHR technology developers to include this capability in EHR technology according to the standards discussed above in order to support meaningful use Stage 2?  

The definition of metadata given is not sufficient to assure interoperability. I recommend that the Metadata definition foundation be the XDS Metadata, with USA Realm vocabulary bindings. In order to assure interoperability the XDS Metadata must also be bound to a transport. This is the role of the XDS, XDR, XDM, and XCA profiles - but the XDS Metadata can also be bound to other transports or an API. The binding to these transports is specific to their environment of use. The use of XDS Metadata in the context of XCA is already in practice as part of the NwHIN-Exchange. The use of XDS Metadata in the context of XDM (e-mail media) is already in practice as part of the Direct Project. The use of XDS Metadata is common between these two NationWide projects, and is the basis of the common XDR protocol between these two projects. Under the XDM profile there is an encoding for use on USB-Memory Drives and CD-ROM. There is now a supplement that shows how encryption is handled in all of these environments including a new profile for transport agnostic encryption.

Question 14: Assuming we were to require that EHR technology be capable of meeting the first use case identified by the HIT Policy Committee, how much more difficult would it be to design EHR technology to assign metadata in other electronic exchange scenarios in order to support meaningful use Stage 2? Please identify any difficulties and the specific electronic exchange scenario(s).

See answer 13: The use of a common metadata model is very important to enable interoperability, privacy, security, and safety. Metadata is more than a transaction specification, but a factor in the longitudinal use of that data. Metadata needs to consider object types beyond HL7 CDA. DICOM has a document format defined by their Structured Report specification. There are many who continue to use unstructured documents in PDF form (e.g. EKG report). There are others using CCR. There are documents that are based on W3C (Digital-Signatures). There are documents based on OASIS (Workflow). There are others that might be using a totally new form. The Metadata defined in XDS was derived from CDA but distanced itself from CDA to allow for other document types. In this way the XDS metadata requires only that there be a MIME-TYPE defined for the document. If a CDA document, or CDA Header fragment, were used, there would be significant overhead for very little value.

Question 15: Building on Question 14, and looking more long term, how would the extension of metadata standards to other forms of electronic health information exchange affect ongoing messaging and transactions?  Are there other potential uses cases (e.g., exchanging information for treatment by a health care provider, for research, or public health) for metadata that we should be considering?  Would the set of metadata currently under consideration support these different use cases or would we need to consider other metadata elements?  

Over time we need to recognize that patients are free to move globally. Thus a metadata model needs to consider the patient as the center in an environment that is beyond the USA. The XDS Metadata model is being globally adopted.

Question 16: Are there other metadata categories besides the three (patient identity, provenance, and privacy) we considered above that should be included?  If so, please identify the metadata elements that would be within the category or categories, your rationale for including them, and the syntax that should be used to represent the metadata element(s).

Metadata categories are better described as uses of metadata. This is to say that the different needs drive a set of metadata. Each metadata attribute tends to have many uses. A good example of this is protecting privacy, which leverages just about all metadata values.
Question 17: In addition to the metadata standards and data elements we are considering, what other implementation factors or contexts should be considered as we think about implementation specifications for these metadata standards?  
Metadata must also be bound to an encoding; this is typically specific to the transport. For example, the use of XDS Metadata as bound to XDS, XDR, XDM, and XCA.

Question 18: Besides the HL7 CDA R2 header, are there other standards that we should consider that can provide an equivalent level of syntax and specificity?  If so, do these alternative standards offer any benefits with regard to intellectual property and licensing issues?

Please re-assess the XDS metadata. It was created through a global initiative over many years of analysis, prototyping, and implementation. IHE started with the evaluation that the CDA Header had the right elements, which seems to be a common understanding expressed in this ANPRM. Yet the CDA Header is not laid out to be Metadata, and is restrictive of the content type. Most important is to separate metadata from privacy/security policy and enforcement.
Question 19:  The HL7 CDA R2 header contains additional “structural” XML elements that help organize the header and enable it to be processed by a computer.  Presently, we are considering leveraging the HL7 CDA R2 header insofar as the syntax requirement it expresses relate to a metadata element we are considering.  Should we consider including as a proposed requirement the additional structures to create a valid HL7 CDA R2 header?
The use of the CDA header is overly exhaustive, and yet the encoding of the attributes as defined by CDA is not necessarily the proper encoding for metadata. Being pure XML is not always the right solution.
Question 20: Executive Order (EO) 13563 entitled “Improving Regulation and Regulatory Review” directs agencies “to the extent feasible, [to] specify performance objectives, rather than specifying the behavior or manner of compliance that regulated entities must adopt;” (EO 13563, Section 1(b)(4)).  Besides the current standards we are considering, are there performance oriented standards related to metadata that we should consider?
I agree that regulations should be more performance related, for example focusing healthcare advancements on better outcomes. However, when defining an Interoperability layer, exacting detail needs to be specified. This allows the communicating systems to be developed in isolation and yet fully interoperate. It is the outcome of the interoperability that should be measured through performance. That is to say that the goal is not interoperability or metadata; the goal is to provide better outcomes through some proven workflow that needs interoperability.

Conclusion
I am very pleased with this ANPRM. Although I disagree that the CDA Header is the solution, there is much that is right. My main concerns are that there is too much reliance on CDA versus an independent metadata definition that can handle other objects; there is too much expectation that the patient identity description be included in the metadata; and that privacy policy is too tightly bundled.

I have been involved in many metadata discussions, including the derivation of the XDS metadata. I learned a lot during these experiences and was fascinated at the combined knowledge that was used to create the XDS metadata model. I am not alone in lamenting the unfortunate choice of ebRIM for this metadata model, but it was the best standard available at the time. The model is still the right model. Further, the model has been applied to the various HIE deployment architectures (XDS, XDR, XDM, XCA), and could be applied to others as well. See One Metadata Model - Many Deployment Architectures.

Friday, August 19, 2011

IHE IT Infrastructure Technical Framework volumes and supplements published

The IHE IT Infrastructure Technical Committee has published the following Technical Framework volumes as of August 19, 2011:
·         Volume 1 (ITI TF-1): Integration Profiles
·         Volume 2 (ITI TF-2): Transactions (volume 2 is divided into three separate sub-volumes)
o   Volume 2a (ITI TF-2a): Transactions ITI-1 through ITI-28
o   Volume 2b (ITI TF-2b): Transactions (cont.) ITI-29 through ITI-64
o   Volume 2x (ITI TF-2x): Appendices A through W and Glossary
·         Volume 3 (ITI TF-3): Section 4–Cross-Transaction Specifications and Section 5–IHE Content Specifications

Newly made final text: Async Web-Services, XCA, MPQ, PIX v3, PDQ v3, and PDO. The documents are available for download at http://www.ihe.net/Technical_Framework.

The Committee has also published the following supplements to the IHE IT Infrastructure Technical Framework as of August 19, 2011:
o   Cross-Community Fetch (XCF) - Published 2011-08-19
o   Cross-Community Patient Discovery (XCPD) - Revised 2011-08-19
o   Cross-Enterprise User Assertion - Attribute Extension (XUA++) - Revised 2011-08-19
o   Document Encryption (DEN) - Published 2011-08-19
o   Healthcare Provider Directory (HPD) - Revised 2011-08-19 
o   On-Demand Documents - Revised 2011-08-19
o   Retrieve Form for Data Capture (RFD) - Revised 2011-08-19
o   XAD-PID Change Management (XPID) - Published 2011-08-19
o   XDS Metadata Update - Revised 2011-08-19

These profiles will be available for testing at subsequent IHE Connectathons.  The documents are available for download at http://www.ihe.net/Technical_Framework.

Comments on all documents can be submitted at http://www.ihe.net/iti/iticomments.cfm.

HIT Standards Committee NwHIN vs Direct maturity chart

The view of HIT Standards maturity and adoption is one of the things that the HIT Standards Committee discussed this week. This is a fantastic update since the original that I blogged about in July. Please see John Halamka's summary of the HIT Standards August Meeting for all the things that happened. The specific section was what Dixie presented.
Dixie Baker presented the preliminary recommendations for building blocks that support data exchange in both "push" and "pull" models.   The key innovation in Dixie's work is the process for reviewing existing standards for appropriateness, adoption, maturity, and currency. 
The stated charge of this PowerTeam is:
“Using the NwHIN Exchange and Direct Project specifications as primary inputs, recommend a modular set of transport, security, and content components (“building blocks”) that can be selectively combined and integrated to enable the trusted exchange of content in support of the meaningful use of electronic health record (EHR) technology”
I commented strongly during the first presentation of these charts in July, and blogged about it. I would like to believe that it was my blog that caused a 'PowerTeam' to be created to re-examine it. This PowerTeam met on the 11th (3 members, if I remember correctly), where I again provided very detailed comments online. I was contacted directly to understand and resolve these comments. I also participated in some NwHIN-Exchange meetings where comments were developed and delivered to the PowerTeam. Ultimately, I was trying to provide evidence that the NwHIN-Exchange specifications were more mature than they were being portrayed.

So, on the 17th I expected the slides to be perfect. Well, they are better. In fact, I think they might be as good as can be expected at this time. I think that further adjustment can only be influenced by a new group of people, one that is not Vendors or Consultants. I can't even fault ONC for this fact. ONC should be skeptical of 'facts' that come from Vendors and Consultants. They want and deserve facts from Hospitals and National/Regional/Local Health Information Exchanges.

I expected when John Halamka first introduced this chart that the purpose was to show that Direct was more mature and better accepted than Exchange. It turns out that many members of the NwHIN-Exchange and CONNECT have been providing strong comment and evidence in support of Exchange. The EHRA has also provided input in their White paper on Health Information Exchange types. The result is that the charts now show that they are almost dead even. However, the FACA committee is still taking as more authoritative the 'opinion' of a few members over 'facts' provided by outsiders. Ultimately more adjustments will be made. Ultimately the decision will be, and should be, that both specifications need to be endorsed. More implementers of NwHIN-Exchange need to speak up.

I don't like their feeling that they get to 'eliminate' or 'reconsider' specifications. Existence or continued 'consideration' will be based on market need, not the opinion of 3-4 people. I am not saying that (from page 12 - not shown here) the access consents or HIEM are good specifications; they are not. However, in the case of the Web Services Registry (UDDI), which is suboptimal, better alternatives are not yet available; they are in the works and will eventually replace it.

The NwHIN-Exchange folks were very upset at where the Authorization Framework landed, as it is critical for Query/Retrieve patterns. They will surely continue to push for a better evaluation of this. There is a perception by a few that the Authorization Framework is much harder than it actually is. The specifications mentioned in the 'consider' category are content or uses, so they shouldn't have been evaluated.

I have been asked why XDS isn't included - XDS has never been a part of NwHIN-Exchange. The NwHIN-Exchange is about federating local/regional exchanges into a nationwide exchange. This federation is the role of XCA. There has never been any hint of how one might build a local/regional exchange. However, as you likely observed, the XCA Query and Retrieve transactions are derived (by IHE) from the XDS Query and Retrieve transactions. Thus a system that knows how to interact with XDS knows how to interact with XCA. This is a design principle of XCA. So this is normal, and expected.
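That derivation is visible at the wire level: a FindDocuments stored query carries the same parameter slots whether it is sent as an XDS Registry Stored Query or an XCA Cross Gateway Query; only the receiving actor differs. A hedged sketch, where the slot names ($XDSDocumentEntryPatientId, $XDSDocumentEntryStatus) and the Approved status URN are the real ones from the profiles, but the helper function and the actor objects in the comments are hypothetical illustrations, not any actual API:

```python
# ebRIM status URN used by the stored queries to ask for current documents.
APPROVED = "urn:oasis:names:tc:ebxml-regrep:StatusType:Approved"

def find_documents_params(patient_id, status=APPROVED):
    """Parameter slots for a FindDocuments stored query.

    The same slots are valid for an XDS Registry Stored Query and for an
    XCA Cross Gateway Query, which is why an XDS-capable system can talk
    to an XCA Responding Gateway without new query logic.
    """
    return {
        "$XDSDocumentEntryPatientId": patient_id,
        "$XDSDocumentEntryStatus": status,
    }

# Illustrative only -- 'xds_registry' and 'responding_gateway' are
# hypothetical actor objects, not a real library:
#   xds_registry.stored_query(find_documents_params(pid))           # local HIE
#   responding_gateway.cross_gateway_query(find_documents_params(pid))  # XCA federation
```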

What is lost in the chart is that XDR is a common thread between Direct and Exchange. The XDR protocol is being heavily used in Exchange, especially by SSA. The XDR protocol holds a special place in the Direct project as well, as it is tangentially endorsed through the specification that shows how to bridge Direct and XDR. Thus "Document Submission" should be recognized as XDR and re-assessed as mature.

From EHRA White paper
Of note, from an EHR perspective, if you support XCA Query/Retrieve and XDR push, you fully support XDS. This does not mean that you have XDS infrastructure; that is a big operational undertaking. But it does mean that if you have tested your EHR against XDR and XCA, you are XDS compliant. The advantage of XDS is that it identifies a set of services that would be hosted centrally as high-availability, allowing clinics to be off-line at night and on weekends. There should be a white paper from IHE on how to make a Regional Health Information Exchange using the XDS family (PIX, PDQ, ATNA, XUA, BPPC, CT, and maybe more). This all ties nicely into the need for One Metadata Model - Many Deployment Architectures.
The only aggravating thing in the whole presentation (pages 9-11) is that every NwHIN-Exchange specification is given an alternative of REST or Direct (which I disagree is possible), while the Direct specifications do not include the alternative of XDR (which is proven as being an alternative).

Conclusion
This is a huge improvement, so much so that I have very little that I would request be changed. The biggest recommendation is to get those implementing NwHIN-Exchange to speak up. Surely there are NwHIN-Exchange partners that can show real results. I know of large providers and regional health information exchanges that are planning on using the NwHIN-Exchange specifications independent of the NwHIN-Exchange itself. These positive uses of NwHIN-Exchange need to be brought forward. I have expressed my knowledge and opinion; it has been recognized and has influenced as far as it is going to.

Monday, August 15, 2011

IHE Educational Webinars

Update: The recordings of the webinars described below are now available.

The IHE Privacy and Security webinar is now scheduled and starts this week. I somehow missed the overall notification of the IHE webinar series. They update the schedule on an as-needed basis and don't send out specific updates. There are many good webinars available.

What you need to know is when the IHE Privacy and Security Webinar will be. It is broken into two parts:


Security and Privacy Overview – Part 1
    • Wednesday, August 17, 10:00am — 11:00am CDT
    • Register
    • Speaker:
      • John Moehrke — GE Healthcare

Security and Privacy Overview — Part 2

    • Wednesday, September 7, 10:00am - 11:00am CDT
    • Register
    • Speaker:
      • John Moehrke — GE Healthcare

Now if you are a regular reader of my blog, you have already seen the webinar through my blogging of it. I do still encourage you to attend in person so that I can interact with someone... I presume there will be recordings, I will post links to those when they are available.