Ask me a Question

I look at and respond to comments anywhere on my blog, but I recognize that some don't like Google's requirement for a Google account. Thus you can also send a question to my Gmail address, which is simply my name - JohnMoehrke

The rules are simple:
  • Topics I'll cover include anything in my banner above...
  • All questions and suggestions posted are subject to this Blog's Policies.  
  • If I don't know or cannot otherwise answer your question, I'll let you know.  
  • Questions are not necessarily answered or addressed in the order received.

25 comments:

  1. What wine best goes with DICOM?

  2. Dave, I would suggest MD 20/20. Only because it is really strong, and clearly the "MD" stands for Medical Doctor.

  3. Do you know of any good introductory training resources for HealthCare IT workers to better understand IHE profiles/transactions in general? Beyond the documentation on IHE.net.

  4. Roger,
    Thanks for your question. I have outlined what I know at http://healthcaresecprivacy.blogspot.com/2011/12/introduction-to-ihe-profiles.html

  5. Do you know of any policies/procedures/guidance that is used when it is necessary to unmerge patient data in an HIE?

    Replies
    1. Emma, I don't know of any policies. When the topic comes up in standards or profiling circles, most everyone gets really scared. This is because unmerge couldn't possibly be done automatically. It is a totally manual process, as one must really examine the data itself to determine what went wrong. This is one reason that standards/profiles tend to prefer linking, given that unlinking is easier to automate.

      From a security perspective the operation must still be done by an authorized individual, but it is likely done through a proprietary user interface, or even database tools.

      From a privacy perspective one must first log the action in a way that makes sense for any future privacy reporting. Then one must also check the audit logs to see whether there were any inappropriate accesses while the records were improperly merged/linked. This is clearly more manual processing.
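
      If it helps to make that audit-log review concrete, here is a minimal sketch of pulling the relevant events, assuming the audit trail is exposed as a FHIR R4 AuditEvent endpoint (the base URL, patient reference, and dates are hypothetical; an ATNA audit record repository would be queried differently):

      ```python
      # Hypothetical sketch: pull audit events recorded during the window when two
      # records were improperly merged, so a privacy officer can review them for
      # inappropriate access. Assumes a FHIR R4 AuditEvent endpoint.
      import requests

      AUDIT_BASE = "https://audit.example.org/fhir"   # hypothetical audit endpoint
      MERGE_START = "2024-01-10T00:00:00Z"            # when the bad merge happened
      MERGE_END = "2024-02-01T00:00:00Z"              # when it was discovered and undone

      params = {
          "date": [f"ge{MERGE_START}", f"le{MERGE_END}"],
          "patient": "Patient/example-merged-id",     # the identity that absorbed the merge
          "_count": "100",
      }
      resp = requests.get(f"{AUDIT_BASE}/AuditEvent", params=params)
      resp.raise_for_status()

      # Each returned event still needs a human to judge whether the access was appropriate.
      for entry in resp.json().get("entry", []):
          event = entry["resource"]
          print(event.get("recorded"), event.get("type", {}).get("code"))
      ```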

  6. John,

    This may be too technical and/or specific to spend your time on. Anyway:

    I have just started with the IHE DEN profile (Version August 19, 2011).

    My use case is to encrypt a document in the Document Repository without knowing
    the expected recipient in advance. It is part of the access control concept
    that only recipients granted permission will receive the decryption key for a particular
    document.

    My idea was to use a symmetric content encryption key, but all the algorithms the profile
    supports for confidentiality purposes appear to be asymmetric ones (3.32.4.1.6-1).

    Is there a principal obstacle in using DEN with a symmetric content encryption algorithm as well?

    I will greatly appreciate any help on this.

    Best Regards

    Marek

    Replies
    1. Marek,

      The profile does support "Shared Symmetric Key" see section 3.32.4.1.6.4.2.

      "
      The shared symmetric key method applies symmetric encryption to deliver the content encryption key to a recipient. The symmetric key can be pre-shared or involve key retrieval, both of which are out-of-scope of this transaction. Actors that use this method are assumed to have some kind of key management infrastructure in place supporting symmetric keys.
      The shared symmetric key method uses symmetric key-encryption keys (KEKRecipientInfo) as CMS RecipientInfoType. Portable Media Creator and Portable Media Importer actors shall support AES key wrap algorithms (see Table 3.32.4.1.6-1). CMS mandates that the key length for the key encryption key minimally has the length of the content encryption key.
      "

      It is in there with a justification of use cases just like the one you describe, where a document might be published as an encrypted document. Retrieval of this encrypted document would be lightly protected, as it isn't really exposing anything (lightly protected because risk never goes to zero). At some later time an authorized recipient uses some non-defined protocol to request authorized access to the key. This key is shared with that authorized recipient, so that recipient can decrypt the document. Note that the recipient must be fully trusted, as they now have the unencrypted document AND the shared key.

      Note that something very similar can be done with a password; it is hardly any different if you use a sufficiently long and complex password. And passwords might be easier to control and communicate.
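
      If a concrete illustration helps, here is a minimal sketch of that shared-symmetric-key idea using the Python cryptography package: a random content-encryption key protects the document, and a pre-shared key-encryption key wraps it for delivery. This only shows the key handling; an actual DEN implementation would carry these inside CMS (KEKRecipientInfo) structures, which are omitted here.

      ```python
      # Sketch only: a per-document content-encryption key (CEK) encrypts the document,
      # and a pre-shared key-encryption key (KEK) wraps the CEK for the recipient.
      # The CMS EnvelopedData encoding required by DEN is not shown.
      import os
      from cryptography.hazmat.primitives import keywrap
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      kek = os.urandom(32)                        # pre-shared KEK, distributed out of band
      cek = AESGCM.generate_key(bit_length=256)   # random per-document CEK

      # Symmetric bulk encryption of the document with the CEK.
      nonce = os.urandom(12)
      document = b"<ClinicalDocument>...</ClinicalDocument>"
      ciphertext = AESGCM(cek).encrypt(nonce, document, None)

      # Wrap the CEK with the KEK; only recipients holding the KEK can unwrap it.
      wrapped_cek = keywrap.aes_key_wrap(kek, cek)

      # Later, an authorized recipient who was given the KEK recovers the CEK and the document.
      recovered_cek = keywrap.aes_key_unwrap(kek, wrapped_cek)
      assert AESGCM(recovered_cek).decrypt(nonce, ciphertext, None) == document
      ```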

    2. John,

      thanks a lot for the explanation!

      However, I am a bit confused about the section you quoted. In my understanding, the methods described in 3.32.4.1.6.4.1, .2, and .3 describe how to protect the 'content encryption key' during its distribution. The way I read 3.32.4.1.6.4.2 is that the "symmetric key" is just a "key for the key"; "password for the key" and "PKI interaction for the key" are the alternatives.
      3.32.4.1.6.4.2: "The shared symmetric key method applies symmetric encryption to deliver the content encryption key to a recipient."

      My question was: Can the content encryption key be a symmetric key as well? Does this approach have some fundamental weakness?

      Best Regards
      Marek

    3. Marek,

      Now I understand.

      Bulk encryption, the encryption of the content, will always use a symmetric encryption algorithm. This is because bulk encryption is potentially over a very big 'bulk' and therefore needs to be very efficient. The algorithms used for bulk encryption are fast and efficient, yet very robust to attack; they get their strength from the very random keys they use and from purpose-designed algorithms. That is why they are used on the bulk data.

      The problem is that the randomly chosen key now needs to be communicated, and that is what the asymmetric algorithm is used for. The speed and CPU overhead of the asymmetric algorithm is reasonable given how relatively small the blob of data to be encrypted is. It is true that all the algorithms listed are simply used to hide this symmetric content-encryption key (the blob). The choice among them is for user-interface, ease-of-use, or other key-management reasons.
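
      To make that hybrid pattern concrete, here is a small sketch (again using the Python cryptography package, with the CMS packaging omitted): AES does the bulk encryption, and the recipient's RSA public key is used only to hide the small symmetric key blob.

      ```python
      # Sketch of the hybrid pattern: symmetric AES for the bulk content, asymmetric
      # RSA-OAEP only for transporting the small content-encryption key (CEK).
      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      cek = AESGCM.generate_key(bit_length=256)        # random symmetric CEK
      nonce = os.urandom(12)
      bulk_ciphertext = AESGCM(cek).encrypt(nonce, b"large document bytes ...", None)

      # The asymmetric algorithm only hides the small CEK blob, not the bulk data.
      encrypted_cek = recipient_key.public_key().encrypt(cek, oaep)

      # The recipient recovers the CEK with their private key, then decrypts the bulk.
      recovered_cek = recipient_key.decrypt(encrypted_cek, oaep)
      plaintext = AESGCM(recovered_cek).decrypt(nonce, bulk_ciphertext, None)
      ```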

  7. John, we are having a discussion with a client regarding OpenID vs. SAML for authentication as it relates to HIPAA. So, my questions are:

    1. Is one more widely recognized in the medical/healthcare community?

    2. Is there technically much difference in implementation, etc...

    3. What are the biggest drivers for using either one?

    4. If you were putting in a system which would you use?

    Thanks for any thoughts or comments or reference links as it relates to HIT standards.
    Steve

    Replies
    1. The reality is that there is not a clean and decisive answer to this. Further, it isn't based on much actual use, but rather on generalizations: generalizations of what is happening outside of healthcare, but also of what is happening within healthcare.

      The background is that both OpenID (based on OAuth) and SAML are mostly functionally the same. They are not the thing that authenticates the human; they are the thing that tells a relying party (server) that the human has been authenticated. In so doing they are both Federated Identity systems.

      SAML is seen as the more formal one. It is based on an open standard that is fully ratified and final. There are many mature toolkits and implementations. It has lots of flexibility, yet it can be profiled down to something small relatively well. It can be used for all levels of identity assurance. Most 'enterprise' class user directories include support for SAML (e.g. Microsoft Active Directory).

      OpenID and OAuth are based on implementations of something that seems to work, but they are struggling with documenting what it is that seems to be working. Said another way, this is NOT YET A STANDARD. Indeed there are strong arguments over what is and is not part of it. BUT, OpenID is potentially easier to deploy in a purely web-based environment, and may be more friendly to low-end devices such as tablets and phones. OpenID is seen as the solution for internet services such as Google, Facebook, Twitter, etc. It can easily handle these very-low-assurance identities.

      Note that SAML has profiles for doing web-based authentication that ultimately are not much different from OpenID; just not as much time has been spent making them as slick as Facebook, Google, or Twitter.

      I have first-hand experience with SAML, as it is the tool of choice by GE across the whole company, and specifically when we use external services such as Sabre for flight booking. We use it in a total web environment. When I hit a web-site that is protected, I get redirected to the SAML identity provider authentication, and then redirected back; that is unless I have cached credentials that are fresh enough.

      The big difference from my perspective, beyond the fact that OpenID is not yet a frozen standard, is that SAML has far better support for carrying more than just a User-ID. A SAML assertion can carry roles, purpose-of-use, authorization pointers, etc., such as is profiled in IHE-XUA.
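
      As a rough illustration of that point (not the exact XUA-required vocabulary; the subject, attribute names, and values below are only examples), a SAML assertion's AttributeStatement can carry a role and a purpose-of-use alongside the user identity:

      ```python
      # Illustrative only: build a minimal SAML-assertion-shaped XML fragment carrying
      # a role and purpose-of-use attribute, in the spirit of IHE-XUA.
      import xml.etree.ElementTree as ET

      SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
      ET.register_namespace("saml", SAML)

      assertion = ET.Element(f"{{{SAML}}}Assertion")
      subject = ET.SubElement(assertion, f"{{{SAML}}}Subject")
      ET.SubElement(subject, f"{{{SAML}}}NameID").text = "drsmith@hospital.example.org"

      attrs = ET.SubElement(assertion, f"{{{SAML}}}AttributeStatement")

      role = ET.SubElement(attrs, f"{{{SAML}}}Attribute",
                           Name="urn:oasis:names:tc:xacml:2.0:subject:role")
      ET.SubElement(role, f"{{{SAML}}}AttributeValue").text = "Attending Physician"

      pou = ET.SubElement(attrs, f"{{{SAML}}}Attribute",
                          Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse")
      ET.SubElement(pou, f"{{{SAML}}}AttributeValue").text = "TREATMENT"

      print(ET.tostring(assertion, encoding="unicode"))
      ```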

      Lastly, these two are not in conflict. It is very reasonable to use OpenID for end-user devices, especially BYOD, and then at the web server convert the OpenID credentials into SAML for all backend communication... The WS-Trust protocol is available for this, and is also widely supported.

      My preference is to stick with SAML. The problem is that most of the SAML solutions don't scale down to a 5-doctor-office. Then again, that office is likely to be using a cloud service for their EHR anyway.

  8. Hello John,

    we have the following issue. We have CDA documents with references to external resources such as an XSLT. The real problem is that the XSLT could be changed in the future, and if we compute a signature over the XSLT it would no longer be valid. The requirement is that the XSLT is accessed via a URL (located somewhere external to the CDA). When creating the signature it is required that the document be transformed using the XSLT (only what is seen should be signed - the W3C principle).

    Do you have any experience with resources referenced from a CDA document?

    Thanks and best regards,
    Erno

    Replies
    1. There are a couple of observations/questions here:
      a) When someone signs a CDA are they signing the representation that they see or the underlying data?
      b) How do you sign a sub-set?
      c) How does one know what they are signing when externally referenced information is included?
      d) How can one sign using a style-sheet when the style-sheet could change in the future?

      I can't really address the 'resources referenced from a CDA document'; I just don't know CDA well enough. However, I would not expect a signature on a CDA document to implicitly be signing anything not inside the CDA document. In the context of the IHE-DSG profile, you can/should bring those referred-to files into the manifest, and thus sign them at the same time.

      My recommendation has always been to use whole document signature ONLY. To sign the CDA and not the style-sheet or a transform. This recommendation is based more on issues with partial signatures: known d-sig vulnerabilities, user experience, and long term management problems.

      But that recommendation doesn't really answer the question.
      1) If you must sign parts of the CDA, then you must control the transforms used. If these are not controlled then the signature will be seen as invalid, which it should be, since the validity can't be confirmed. The IHE-DSG can somewhat handle this: just add the transform to the manifest and thus it is signed too (see the sketch after this list). The problem is you have to get creative with the Reference element. The IHE-DSG tells you how to create a URN out of a registered document; you need a URI to your transform.
      2) If you must include the style-sheet used to display, then it must be signed also. Same answer on URI to your style sheet.
      3) User display of what is being signed is indeed difficult, and up to application validation. The signing system must be fully vetted with a representative set of the user base that will be using it. This is a task that is beyond the technology. This gets into operational proofing, which is usually needed, especially on the signing side. This involves all the User Experience type testing and validation.
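
      Here is a small sketch of the point in 1) and 2): whatever the exact IHE-DSG/XAdES packaging (not shown), both the CDA and the referenced XSLT need to be digested and covered by the signature, so that a later change to the externally hosted XSLT breaks validation instead of silently changing what the signer saw. The file name and URL below are hypothetical.

      ```python
      # Sketch: compute digests for both the CDA and the referenced XSLT; each digest
      # would become a Reference in the signed manifest. The XML-Signature encoding
      # itself is not shown.
      import hashlib
      import requests

      def sha256_hex(data: bytes) -> str:
          return hashlib.sha256(data).hexdigest()

      cda_bytes = open("discharge_summary.xml", "rb").read()
      xslt_bytes = requests.get("https://example.org/stylesheets/cda-render.xsl").content

      references = {
          "urn:example:document:discharge_summary": sha256_hex(cda_bytes),
          "https://example.org/stylesheets/cda-render.xsl": sha256_hex(xslt_bytes),
      }

      # At verification time, recompute and compare; any mismatch (for example, a
      # republished XSLT) means the signature must be treated as invalid.
      for uri, digest in references.items():
          print(uri, digest)
      ```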

  9. Hi John,
    I'm trying to work through the details regarding electronic transport for MU2 (i.e. direct) and I've heard some confusing information from different sources that I'm hoping you can clarify.

    Specifically, the optionality between SMTP+XDM and XDR: can a certified system pass the requirements by providing just Direct XDR transmission/reception to/from a HISP? Must we _also_ support (and demonstrate support for) SMTP+XDM including S/MIME signing & encryption? Ideally, we would like to only have to use XDR, and I originally thought that was sufficient to meet the criteria, but now I'm not so sure. Can you clarify?

    Replies
    1. My understanding is that under Certification, yes, the Certified EHR Technology MUST show that it can comply with Transport (a). This is without question true. This means that it MUST be able to send and receive SMTP-S/MIME email according to the Direct specification.

      The (a)+(b) and (b)+(c) are options to be certified. However, if the healthcare provider uses (a)+(b) or (b)+(c), they must be using Certified versions of them.

      I think that the most re-usable technology solution is the (b)+(c), which in my view is XDR. This XDR capability can be used point-to-point, or can be used to communicate with an independent HISP that converts the transactions to Direct (the actual topic of the document referred to in the (b) Transport).

      The result of this is that Technology needs to be certified for each functionality that a customer might use, but the Technology MUST be certified for (a) regardless of whether the customer ends up using it.

  10. Hello John,

    I have a question on the level of detail required for the Audit Reports. Is it necessary to record patients returned on reports that are generated? For example “show me all patients taking Lipitor”. This can be a large list and I am concerned about the number of entries that would need to be logged.

    Thanks,
    Shelly

    Replies
    1. Shelly,

      The use-case you bring forward, regarding 'how many patients are taking Lipitor', is not a Security or Privacy use-case. Thus I would forbid this use-case from utilizing the Security/Privacy audit log. Further, this level of detail would not, and should not, be found in the Security/Privacy audit log. The Security/Privacy audit log would not contain clinical details; it might point at a lab report or prescription that indicates this information, but it clearly would not include any hits on the word 'Lipitor'.

      There needs to be a very clear separation between the Security/Privacy audit log (the raw log), clinical data, and even the system-event logs that might be used for system problem analysis.
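
      To illustrate that separation, here is a sketch of what a security/privacy audit entry might look like (shaped loosely like a FHIR AuditEvent; an ATNA/DICOM audit message carries equivalent fields, and all names and values here are made up): it records who, when, and a pointer to what was touched, never the clinical content.

      ```python
      # Illustrative only: a security/privacy audit entry points at the record that was
      # accessed; it never contains the clinical query or its results.
      security_audit_entry = {
          "resourceType": "AuditEvent",
          "recorded": "2024-03-02T14:07:00Z",
          "action": "R",                                    # a read/query happened
          "agent": [{"who": {"display": "pharmacist.jones"}, "requestor": True}],
          "entity": [
              # Pointer to the report that was generated, not its contents.
              {"what": {"reference": "DiagnosticReport/med-usage-report-123"}}
          ],
      }

      # What must NOT appear in the security/privacy audit log: the clinical query
      # term or the list of patients returned. That belongs to the clinical system.
      clinical_query = {"medication": "Lipitor", "patients_returned": 4200}
      ```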

  11. We are building a FHIR-based statewide service for the New York DOH. One of the services or operations on our list is to implement the PDQ and PIXm services. For PDQ we found the request and response templates on the IHE wiki.

    We are not sure about the response template for PIXm. Could you please help with that? Any additional guidance on this will be appreciated.

    Replies
    1. The current specification from IHE for PIXm is based on FHIR R4.
      https://www.ihe.net/resources/technical_frameworks/#IT

      Patient Identifier Cross-reference for Mobile (PIXm) – Revised 2019-12-05

      You can find the Operation Definition and Capability Statements on the IHE GitHub repository dedicated to FHIR conformance resources:
      https://github.com/IHE/fhir

      I put the corrected examples on the IHE ITI GitHub repo dedicated to examples (the FTP site is about to be shut down):
      https://github.com/IHE/ITI-Info

      There will likely be an IG built and available from IHE in the future.
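
      In case a concrete call helps while the IG is being built, here is a sketch of a PIXm (FHIR R4) client query and response handling. The base URL and identifier systems are placeholders; the published Operation Definition and Capability Statements above remain the authoritative description of the response.

      ```python
      # Sketch of a PIXm query: the $ihe-pix operation on Patient returns a FHIR
      # Parameters resource listing cross-referenced identifiers. Placeholders only.
      import requests

      base = "https://pix-manager.example.org/fhir"
      resp = requests.get(f"{base}/Patient/$ihe-pix", params={
          "sourceIdentifier": "urn:oid:1.2.3.4.5|MRN-000123",   # identifier you already know
          "targetSystem": "urn:oid:2.16.840.1.113883.4.1",      # domain you want returned
      })
      resp.raise_for_status()

      parameters = resp.json()                  # a FHIR Parameters resource
      for p in parameters.get("parameter", []):
          if p.get("name") == "targetIdentifier":
              ident = p["valueIdentifier"]
              print(ident.get("system"), ident.get("value"))
      ```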

  12. Hi John. Can you please shed some light on the interoperability challenges with respect to implementing the patient identity management IHE profiles like PIX and PDQ (v2, v3, and FHIR variants)?

    Replies
    1. The IHE profiles for Patient Management are solutions to interoperability problems that would exist without them. Prior to these profiles there were variations in how a client would request a patient identity lookup by demographics or by identifier.

      When one looks at FHIR, it is less obvious that this interop problem would exist without the PDQm or PIXm implementation guides; that likely is an unknown. The FHIR standard is much more explicit about how things should be done than previous HL7 standards like v2 and v3. So from this perspective, PDQm and PIXm might not seem to be needed. This might be true, but it is still helpful at an abstract level to see that the problems we had in the v2 and v3 days do have a solution in FHIR.
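
      For a sense of why the message syntax itself is less of a problem in FHIR, here is a sketch of a PDQm-style demographics query: it is essentially an ordinary Patient search (endpoint and values are placeholders), which is why the hard questions shift to the policy items below.

      ```python
      # Sketch of a PDQm-style demographics query: a plain FHIR Patient search.
      # Endpoint and demographic values are placeholders.
      import requests

      base = "https://mpi.example.org/fhir"
      resp = requests.get(f"{base}/Patient", params={
          "family": "Smith",
          "given": "Alice",
          "birthdate": "1980-07-14",
          "_count": "10",
      })
      resp.raise_for_status()

      for entry in resp.json().get("entry", []):
          patient = entry["resource"]
          for identifier in patient.get("identifier", []):
              print(identifier.get("system"), identifier.get("value"))
      ```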

      With the family of implementation guides (PIX*, PDQ*) we still have interoperability issues that need to be resolved. The class of problem is not strictly interoperability, but rather policy. However, a failure to have policies, or a policy that doesn't support the need, is still a failure, and certainly a failure of interoperability. The policy problem in this case is the region/community-defined policy on what elements must be supported, what elements must be captured, what elements must be part of a query, how to handle non-perfect matches, how to handle a complete failure to match, whether multiple results are allowed, who is authorized to feed, who is authorized to query, where audit logs are stored and what can be done with them, etc...

      With interoperability, policy is the biggest unknown. In all interoperability cases there likely is a technical interoperability specification, but it is unusual for there to be an internationally agreed policy.

    2. Thank you John for the response.
      In the PDQ query message QBP^Q22, the MSH-5 field specifies the domain information where the search should be done by the PDQ supplier. What is the significance of again providing the domain information in the QPD-8 field?

    3. I don't know anything (mostly) about HL7 v3 message encoding.
