Friday, December 30, 2011

Introduction to IHE profiles

I got my first real "Ask me a Question": 
"Do you know of any good introductory training resources for HealthCare IT workers to better understand IHE profiles/transactions in general? Beyond the documentation on "
The formal specifications are all on the IHE web site, known as the "Technical Frameworks", "Supplements", and "White Papers". That site is not very helpful; it is simply a list of documents available for download. These are the formal specifications, so ultimately you must understand them.

There is a more helpful listing by profile available on the IHE Wiki. Each of these is linked to a short description of the profile, which includes specific pointers into the formal specification. I tend to point at these more often than the formal specifications, simply because one can point at a profile rather than a volume fragment of a profile in a library (aka Technical Framework).

The most useful 'training' is the many overview webinars of the profiles. Most of these webinars were recorded and the recording is available. All of them have the PDF version of the slides listed. Unfortunately they are listed by the date that the webinar was given. 

For the ITI committee, of which I am a member, these webinars are further documented on the wiki. That site also hosts the PowerPoint version of the slides.

For the Privacy and Security webinar, I have further expanded this on my blog as a bloginar.

All of these are available from IHE. I must say that I am not aware of training beyond this, though I would guess that there is some out there. I have simply pushed all my efforts to create training into IHE for global and free publication.

Wednesday, December 21, 2011

Predicting Meaningful Use Stage 2 Security

As a member of the HIT Standards ‘Privacy & Security' workgroup I have first-hand experience with the discussions and their potential outcomes. The workgroup met earlier this year to discuss the proposed Meaningful Use stage 2 security and privacy criteria. The workgroup was given a set of criteria to comment on, so it was not given a clean slate to add criteria of its own. Generally the workgroup softened yet focused the criteria (see the detailed table output). There was clear recognition that the Meaningful Use stage 1 criteria were not well understood. Here is a list of some of my articles, which are still among the top 10 popular articles:
The HIT Standards 'Privacy & Security' workgroup added to the criteria pointers to existing standards that explain the criteria in general IT language. Most of the time it pointed at NIST 800-53. The recommendations were presented to the full HIT Standards committee in October.

Six adjustments:
The first criterion is for secure messaging with patients. Specific to the security functionality: the message must be encrypted, authenticable, and audited. The Privacy & Security workgroup tried very hard to stick purely to the security functions without getting tied into implementation specifications. It indicated that both NwHIN-Exchange and Direct were acceptable (Regulatory coexistence of Direct and Exchange). It is not known whether HHS/ONC will keep this criterion general or get more specific. Something that came up later in side conversations was how an EHR could be sure that an endpoint it was communicating with was a specific patient. Thus the need for further analysis on authenticating patient identities outside of direct treatment scenarios (Patient Identity Matching).

The second criterion is to assure that documents that are created include data-provenance information. This is a direct response to concerns that, when importing documents the patient provides, there is a need to identify the original author. Unfortunately the security criterion is not strong non-repudiation (IHE - Privacy and Security Profiles - Document Digital Signature; Signing CDA Documents), but rather a simple functional criterion that typical clinical documents (CDA, CCD, Blue-Button) already support. This criterion is linked to reinforcement of the need for patients to be ‘able' to download a copy of their health information. The Privacy and Security workgroup didn't try to define the content, but rather simply the functionality of being capable of downloading it. In the short term simple data-provenance might be good enough, but eventually we need strong non-repudiation.

The third criterion is a rather small one that surely everyone already supports, commonly known as ‘inactivity timeout' or ‘auto logoff': the system will detect an idle session and somehow prevent further display of, or access to, PHI. This is typical functionality, but it is difficult to describe in words without specifying a specific method.

The fourth criterion is to more fully define security audit logging. This one mostly resurrected the wording that I created as a co-chair in CCHIT back in 2005: define a set of auditable events (right from ATNA), a set of audit attributes (right from ATNA), and a set of audit log management functionality (also right from ATNA and other sources). Thus the criterion should end up looking very much like what CCHIT was testing before. I tried to get IHE ATNA listed as ‘super-compliant', but this was removed by the larger committee. I am confident that IHE ATNA is viewed as compliant, but I don't think that the stage 2 criteria will say this. (How granular does an EHR Security Audit Log need to be?; IHE - Privacy and Security Profiles - Audit Trail and Node Authentication; Accountability using ATNA Audit Controls; and ATNA and Accounting of Disclosures)
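To make the shape of such an audit record concrete, here is a minimal Python sketch of the kinds of attributes these audit messages carry. The field names and dict form are purely illustrative assumptions; real ATNA messages are XML conforming to the RFC 3881 / DICOM audit schemas.

```python
from datetime import datetime, timezone

def audit_event(event_id, outcome, user_id, patient_id):
    # A minimal security audit record: who did what, to whom, when,
    # and how it turned out. This only mirrors the kinds of attributes
    # an ATNA audit message carries; it is not the ATNA wire format.
    return {
        "EventID": event_id,                      # e.g. "PHI-View", "PHI-Import"
        "EventDateTime": datetime.now(timezone.utc).isoformat(),
        "EventOutcome": outcome,                  # 0 = success
        "ActiveParticipant": {"UserID": user_id},
        "ParticipantObject": {"PatientID": patient_id},
    }
```

The point is that each auditable event names the user, the patient, the action, and the result, which is exactly the set of attributes audit log management functionality later filters and reports on.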

The fifth criterion is that systems need to authenticate themselves to other systems on the network. This is the typical system-to-system authentication found in IHE ATNA (e.g. mutually-authenticated TLS). The workgroup tried to focus this criterion on only those communications that cross organizational boundaries, so that it would not be applied to internal communications. I am not sure which way HHS/ONC will go on this. (IHE - Privacy and Security Profiles - Audit Trail and Node Authentication; S/MIME vs TLS -- Two great solutions for different architectures)
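As a sketch of what mutually-authenticated TLS setup looks like in practice, the Python standard library's ssl module can build a server-side context that refuses any peer that does not present a certificate. This is a generic mutual-TLS illustration, not an IHE-published recipe, and the CA file parameter is an assumption:

```python
import ssl

def make_node_context(cafile=None):
    # Server-side TLS context for node-to-node authentication:
    # the connecting system MUST present a certificate that chains
    # to a CA we trust, or the handshake fails.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual authentication
    if cafile:
        # trust anchors are managed deliberately, not a browser bundle
        ctx.load_verify_locations(cafile=cafile)
    return ctx
```

Both sides would do the analogous thing, so each system proves its identity to the other before any PHI flows.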

The last criterion is to clear up the most confusing security criterion from Stage 1: to define exactly what is required of encryption of data-at-rest. Many members of the Privacy and Security workgroup expressed that the Stage 1 criterion was hard to understand (Meaningful Use Encryption - passing the tests), and we all agreed that the criterion needs to be very specific about the risk it is trying to solve. The EHR vendors on the call were strongly advocating for wording that would encourage good software design, specifically an EHR design where the end-user system doesn't save PHI onto the hard-drive, whether on a desktop, laptop, tablet, or other mobile device. We were very unified that this is a good system design that doesn't put PHI at risk of exposure if the system is lost or stolen. Yet if the EHR system does utilize the hard-drive on the end-user system, then the EHR system must support encryption of that PHI. Clearly HHS/ONC is very worried about perceptions of the HHS Breach Notification ‘wall of shame', and thus wants to provide a politically-correct message that tells the general public that they are addressing these breaches. I thus would recommend both: make sure the system design avoids risks of exposure, and allow workstations to use transparent hard-drive encryption.

The workgroup did recognize that the functionality of an EHR to export documents (e.g. to give a copy of health information to the patient) is exempt from the workstation encryption criterion, while also recognizing that there is a need for encryption of this exported information. We recognized that IHE has just released the Document Encryption profile, which would be a future possibility, likely for Stage 3. Prior to that approach, the Provider Organization is expected to protect this exported PHI through other means, such as transparently-encrypting USB memory devices and transparent hard-drive encryption.

There really is not much new this year, mostly clarity provided to previously known security functionality. I see more interest in leveraging existing general-purpose IT security functionality standards, such as NIST 800-53. There is also recognition that the IHE profiles are a proper solution for interoperability (they don't cover functional or operational security), but there is HHS/ONC hesitancy to specify them, out of fear that they drive a specific architecture or specific organizational infrastructure. The workgroup and my interactions with HHS/ONC show that there is a reasonable approach to security functionality as foundational to a high-quality EHR.

Direct: Security risk of PHISHING and SPAM

I am trying to find people who have experience with encrypted e-mail and using Directories to publish certificates. Standards are wonderful, but experience is equally important. While talking to people about their experience with publishing digital certificates in LDAP directories, I have to explain the Direct project use. I am explaining this to IT people who run directories or run mail servers. It goes a little like this:
My use-case is for encrypted e-mail. More specifically it is for encrypted e-mail between hospitals/clinics by doctors. In order to make this happen, one doctor must be able to ‘discover’ the certificate of the other doctor. So I am on a workgroup trying to define how the USA would do this. Specifically, we are recommending that each hospital expose a limited LDAP directory for this purpose. It need only contain the email address and certificate of each individual they allow to receive encrypted e-mail.
For which my security conscious peer responds:
Nice. Publish everyone’s email address and their public cert. phishing encrypted style. Good luck detecting the phish till it’s *way* too late.
The implied message here is that if the e-mail is encrypted to a certificate owned by an end-user, then the IT at the organization can’t look at the content and reject it when they see PHISHING or SPAM patterns. This is what many mature email servers have been doing to limit the amount of SPAM or PHISHING that end-users see. I know that I receive little spam or phishing email, yet my email address is well known and published in lots of places. I compare with others, and am very happy with the IT support given to me at GE. This protection is impossible if the IT department can’t look at the message because it is encrypted.

Note that both the DNS-Cert and the LDAP model of publishing Certs would present this problem.

I do have a good answer:
Yup, the risk is known; and managed: The sender must sign the message too, and it must be signed with a certificate that chains to a trusted root (trust root that are managed, NOT like browsers). So, unsigned messages are discarded, or the phish-er will be highly identified by their e-mail signature
This is indeed the solution that we put into the Direct Project risk assessment. The following comments come from that risk item in the risk assessment:
  • The judgment of the receiving user can determine if the information should be trusted or not.
  • Many will choose to simply discard all non-secured messages as potential SPAM.
I am worried that some will forget this risk, and not treat un-signed email specially. If you forget this risk, and you publish your email address in a directory or in DNS, then you will be discovered and targeted by SPAM and PHISHING attacks. If your certificate is there too, then the attacker can further encrypt the SPAM or PHISHING attack so that your IT department can’t protect you.

The inbound signature MUST be validated.
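The trust decision described above can be sketched as a simple gate that runs before any content filtering or decryption is even attempted. This is a logic sketch with field names I have invented for illustration, not the actual S/MIME processing:

```python
def accept_inbound(message):
    # Gate applied BEFORE decryption/delivery: an unsigned message,
    # or one whose signer does not chain to a managed trust anchor,
    # is discarded as potential SPAM/PHISHing.
    if not message.get("signed"):
        return False  # unsigned: discard, per the risk assessment
    if not message.get("signer_chains_to_trust_anchor"):
        return False  # signed, but not by a trusted root: discard
    return True       # the sender is strongly identified by their signature
```

The design point is that the signature requirement restores accountability that content inspection can no longer provide: either the phisher is rejected outright, or they have identified themselves with a certificate chained to a managed trust root.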

Monday, December 19, 2011

IHE Profile grouping

Should Document Content profiles mandate IHE ATNA? The short answer is "No". The long answer is a lesson in understanding IHE 'grouping'.

I am generally against mandatory groupings; they cause more unnecessary discussion than they help. IHE did mandatory grouping for XDS simply because we needed to drive security/privacy, as there would be no trust of an HIE system that is ignorant of security/privacy. In hindsight we might have done this through some other means, but we used the tools that we had at the time. Therefore all XDS actors must be grouped with the ATNA Secure Node/Application actor.

However, we need to recognize when there is a need to define specific behaviors when grouping happens, regardless of whether the grouping was mandated (as with XDS mandating ATNA grouping) or chosen by system design (an EHR chooses to implement both XPHR and XDS).

Document Content profiles have recognized the likelihood of being grouped "by system design" with one of the XD* transports (XDS, XCA, XDR, XDM); therefore the Document Content profiles do define how one would derive the XDS Metadata values from the Document Content profile's specification of the content. This is an example of a grouping that is not mandated, but is fully defined.

The Document Content profiles should equally recognize that another likely system-design grouping is with ATNA. Thus the Document Content profile should define the Security Audit Log Message derivation. This should not duplicate the audit log definitions in the export/import transport; but clearly the Document Content is being 'created' or 'consumed', and both are security-relevant events. This is really not unlike what is done for XDS Metadata. If this is not defined by the Document Content profile, then the system implementer must figure it out themselves (which I would argue they should be able to do).

Some other likely groupings for Document Content profiles that I can think of: SVS, PWP, PIX, PDQ, etc. I am not saying these must be documented, but surely if PCC felt that a grouping was likely (or to be encouraged), the behaviors would be defined as ‘grouping behaviors’. For example, a specific use of SVS to retrieve a value-set that is used in a defined way.

In this way, if someone chooses to make an application that does nothing but create the Document Content as specified, but doesn't choose to design with any IHE-defined transport or IHE ATNA, then there is no XDS metadata or ATNA message to be tested, as they are not mandated. The things to be tested are driven by the system's design as documented in the system's “IHE Integration Statement”.

Wednesday, December 14, 2011

Regulatory coexistence of Direct and Exchange

This would be a fantastic outcome, but making regulation is a messy process. In the blog post "Standards are not Optional" Doug Fridsma talks about "Optionality" in standards including that data standards for HIE building blocks "need to be unambiguous and have very limited (or no) optionality." This seems to tip the hat toward continued mandate for Direct and no recognition for Exchange.

In Fridsma's blog the phrase "no optionality" is being applied in a very specific way. John Halamka likes to point out that when alternatives are part of a regulation, the result is that the vendor community must support all alternatives. Thus a this-or-that is actually a this-and-that; an "or" is actually an "and". This is a good lesson, as it really should get policy makers to think about what they are asking for: if they include optionality in regulation, they are actually mandating both, meaning they are really not providing optional paths.

However, optionality is a word that can be used in a different way that should not be seen as a negative. For example, the "Consolidated CDA" defines the basics of a document that must be fully specified in a specific way without question; but if (optionality) you have Y or Z information, you may (more optionality words) put it in the same document in this specific (not optional) encoding. This extra information (Y or Z) is optional, but not in the same way as referenced by the OR-means-AND phrase. It is optional because it may or may not have been captured, or may not be relevant to the current context. It is optional because it is not minimally necessary for the broad use of the document; but if you have it, then how to encode it is not optional. This is understood by most who are involved with standards daily, but confusing to those that look at it only once a month.

Note that this kind of optionality is built into the Direct specification. Inside the Direct specification (see 2.1 Health Content Containers) it indicates that if you can send the Document content inside an XDM zip-formatted package, then you SHALL. Meaning that sending the document without the XDM zip package is the minimal requirement, but if you can send it with the packaging and metadata defined by IHE XDM then you must do it that way.
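The rule reads like a one-line conditional; here is a sketch of it (function and field names are mine for illustration, not from the Direct specification text):

```python
def package_for_direct(documents, can_create_xdm):
    # Direct spec, section 2.1: if the sender is ABLE to package the
    # content as an IHE XDM zip (with its metadata), it SHALL do so;
    # a plain attachment is only the minimal fallback.
    if can_create_xdm:
        return {"format": "XDM", "documents": documents,
                "metadata": "derived per IHE XDM"}
    return {"format": "plain", "documents": documents}
```

This is "if-capable-then-SHALL" optionality: the capability is optional, but once the capability exists, the behavior is fully determined.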

I believe that ONC is struggling with how to handle Direct vs Exchange. They had the big struggle between the Power Team and the community pushback. They really want to push 'either', but know that the OR-means-AND rule forces even the littlest vendor or organization to implement both. They are not worried about the big guys (big vendors or big organizations); the big guys have money, resources, and IT know-how. So they struggle with how to mandate ONE while making sure that the other is operationally acceptable. Given their focus on helping the little guy versus not worrying about the big guy, they are more likely to continue to mandate only Direct. There is little question that for the little guy Direct is the best stepping stone. But as Doug Fridsma points out in his blog:
The Modular Specifications project has identified two ways to transport information and has created more modular, substitutable specifications. Utilizing Direct specifications as the foundation, the project has created a Secure Transport specification based on SMTP and S/MIME and XDR and XDM Conversions. A second approach leverages Exchange specifications as a basis, and a Web services approach has been specified as SOAP over HTTP. From the multiple transport standards, two building blocks are now part of our standards portfolio.
I might point out that they have more in common than not: One Metadata Model - Many Deployment Architectures.

I think a reasonable outcome of Stage 2 Meaningful Use is that Exchange be considered an acceptable standard, eligible for endorsement and funding. I don't think Exchange will receive anything more than that. ONC does understand that regional health information exchanges misunderstood their old directive, reading "use Direct" as "not Exchange". So to get this message converted to "Direct is minimal, Exchange is acceptable" would be a good outcome. To get Exchange listed as 'preferable' would be extraordinary.

In the meantime, there are plenty of regional Health Information Exchanges, and consortia of very large organizations, going forward with the Exchange specifications. They are doing this because it is the right thing for them to do, and being among the 'big guys', they are proving that they don't need father ONC to tell them what to do. In doing this they are proving the technology and developing the policies.

Wednesday, December 7, 2011

Patient Identity Matching

IHE has patient identity matching profiles: PIX for inside and across a health information exchange (HIE), and XCPD for across communities (e.g. NwHIN-Exchange). Patient matching is also the job of an eMPI, a system that matches many different patient identities (PIDs) in a cross-reference. This is where the name of the IHE profile comes from: “Patient Identifier Cross-reference” (PIX). This is not to be confused with a Master Patient Identity, a concept where there is one master patient ID that everyone uses. The Cross-Community Patient Discovery (XCPD) profile is better suited to multi-community duty.

The slide at the right is from an upcoming IHE educational webinar on Patient Identity. It shows ALL of the relevant IHE profiles combined into one 'system'. This system combines all the Actors simply for educational purposes; very reasonable products could implement sub-selections of it for specific purposes.

IHE does not define the internal workings of the eMPI system which might implement the IHE PIX manager and/or XCPD responder actors. There is much left not specifically defined by the PIX or XCPD profile. There are several items of interest in the requirements of a PIX manager, specifically demographic matching and the value of globally unique identifiers.

These concepts are often used in combination. For example, the XDS concept of an Affinity Domain requires a master patient identity that is used in XDS as the patient ID. When other entities using a local patient identifier wish to communicate using XDS, a cross-reference system is used to map to this master patient identity. XCPD operates in environments where no eMPI system or master patient identity exists; entities use demographic matching to correlate patients with selected partners, as necessary. No central or authoritative eMPI system enables this process, so repeated matching requests are needed to keep correlations up to date.
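A toy illustration of the cross-reference idea, mapping each community's local identifier to a shared master identity. This models only the concept, not the HL7/IHE PIX transactions, and all the identifiers below are invented:

```python
class PixManager:
    """Toy patient-identity cross-reference: maps (assigning authority,
    local id) pairs to a shared master id, so one community's id can
    be translated into another community's id for the same patient."""

    def __init__(self):
        self._xref = {}  # (authority, local_id) -> master_id

    def link(self, authority, local_id, master_id):
        # record that this local id belongs to the given master identity
        self._xref[(authority, local_id)] = master_id

    def translate(self, authority, local_id, target_authority):
        # find the target community's local id for the same patient
        master = self._xref.get((authority, local_id))
        if master is None:
            return None
        for (auth, lid), mid in self._xref.items():
            if mid == master and auth == target_authority:
                return lid
        return None
```

Real PIX managers add demographic matching, merge handling, and audit; this only shows why the profile is called a cross-reference rather than a master identifier.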

Demographics Matching
Usually an eMPI operating environment tries to define the minimal attributes necessary to make a fuzzy-match algorithm function well enough. "Good enough" is a subjective assessment, but it includes that most of the time a positive match is found, and only very rarely a false match or a false non-match. This tuning of the matching algorithm is the primary function of an eMPI. The downside of using a centralized eMPI service is that it has a database of all of these demographics, and is thus a point of security/privacy threat.

The minimal attributes are often things like First-Name, Last-Name, Date-of-Birth, and Sex. These values are delivered to the Patient ID Manager in the Patient Identity Feed transaction (essentially a basic ADT message). They are then ‘normalized’ to handle things like uppercase vs lowercase, initials, and spelling differences (e.g. Rich, Richard, Dick). These four attributes have been used well beyond the healthcare industry. For example they are used in the gambling world by the ‘house’ to detect repeat offenders. In fact the casinos use a system that doesn’t store the individual's demographics, but rather a cryptographic value, lowering the risk of disclosure if their database were exposed. This is the trick that John Halamka referred to in #8 of his post on Freeing the Data. It is also used to keep one casino’s clientele list from the other casinos, so there is a strong business requirement.
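A minimal sketch of both ideas, normalizing the four attributes and storing only a cryptographic digest of them. The tiny nickname table and the hashing scheme are illustrative assumptions, not any product's actual algorithm:

```python
import hashlib
import unicodedata

# tiny illustrative nickname table; real systems use large curated lists
NICKNAMES = {"dick": "richard", "rich": "richard", "bill": "william"}

def normalize(name):
    # case-fold, strip accents and whitespace, collapse known nicknames
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    name = name.strip().lower()
    return NICKNAMES.get(name, name)

def matching_key(first, last, dob, sex):
    # Store only this digest, not the demographics themselves, lowering
    # the risk of disclosure if the database were exposed.
    canonical = "|".join([normalize(first), normalize(last), dob, sex.upper()])
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Two records entered as "Dick Smith" and "Richard SMITH" with the same birth date and sex produce the same key, so they match without the database ever holding the raw names.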

Change over Time
Recognize that these values, and other values such as the patient's phone number or address, change over time. This is shown by the figure at the right, which comes from The HIT Standards Summer Camp Patient Matching report in August 2011. This change over time can be detected, and when it is detected both the new and old values are remembered as equivalent. In this way one can match data that is submitted under either the old or the new demographics. This does require that the eMPI hold many generations of entries. Identities and demographics changing over time do add complexity, but reality must be recognized.
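The "keep both generations" idea can be sketched as follows; the data model here is mine, purely for illustration:

```python
class DemographicHistory:
    """Keeps every generation of a patient's demographics, so a query
    under either the OLD or the NEW values resolves to the same identity."""

    def __init__(self):
        self._key_to_id = {}  # demographics key -> patient identity

    def enroll(self, patient_id, demographics_key):
        self._key_to_id[demographics_key] = patient_id

    def change(self, old_key, new_key):
        # a detected change links the new demographics to the same
        # identity; the old entry is deliberately NOT deleted
        self._key_to_id[new_key] = self._key_to_id[old_key]

    def resolve(self, demographics_key):
        return self._key_to_id.get(demographics_key)
```

After a name change, data submitted under either name still correlates to the same patient, at the cost of the eMPI holding multiple generations of entries.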

Because this information changes over time, there should also be a place holding the most current set of demographics. It might not be the 'authoritative' set, but it sure would be good to be using the First or Last name that the patient wants to be addressed by. Which brings up the topic of the longitudinal record. The HIE and community exchange are longitudinal, meaning they ultimately will contain many decades of records on any one patient. Throughout this longitudinal record many of the factors will change, even those shown above far off the chart to the lower right (meaning they are highly stable). This means that when pulling a record from an HIE of any kind, one must not expect that the demographics inside that document necessarily represent the current or even local understanding of the patient's demographics. This doesn't mean the document should not be interrogated, but when discrepancies are found they should be somewhat expected, with possibly only a warning message to the user.

Additional Identifiers
There are many different types of identifiers that a patient can use to uniquely identify themselves. If these identifiers are provided as input to the eMPI, they help produce a better positive match. If these identifiers are treated as opaque, fully-specified identifiers, they don't require special handling. That is to say, both the identifier and the identifier of the assigning authority are submitted. With a generic system like this, the solution supports endless types of identifiers.

For instance, if the patient has a SSN, one enters the SSN with an ‘assigning authority’ for the SSN administration (i.e., 2.16.840.1.113883.4.1). If the patient has their insurance card, you enter that with the insurance administrator as the assigning authority. If the patient carries a patient ID from another facility, you enter that. It is always a <ID value> + <assigning authority value>; this is just another patient ID in the context of an eMPI (even when the ID isn’t healthcare specific). It is very important that everyone uses the same values for ‘assigning authority’; for this one likely needs a ‘value-set’. This is especially true when the assigning authority doesn’t have its own globally unique assigning authority value, or the value is hard to discover.
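The <ID value> + <assigning authority value> pairing can be modeled directly. The SSN OID below comes from the text; the local-facility OID is hypothetical:

```python
from collections import namedtuple

# Every identifier the eMPI sees is just value + assigning authority,
# handled opaquely regardless of where it came from.
PatientId = namedtuple("PatientId", ["value", "assigning_authority"])

SSN_AUTHORITY = "2.16.840.1.113883.4.1"   # SSN OID, from the text
LOCAL_AUTHORITY = "1.2.840.99999.1"       # hypothetical facility OID

def same_identifier(a, b):
    # a match requires BOTH the value and the authority to agree
    return a == b
```

The same string value under two different authorities is two different identifiers, which is exactly why agreeing on assigning-authority values matters.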

Universal Health ID
What would be best is some form of universal health ID. This is currently used in other countries, such as across Europe in the epSOS multi-country exchange. There are regulations forbidding the USA government from funding an effort to create a universal health ID. A unique approach to get around this is an effort to create a digital identity for Medicare beneficiaries. It is interesting how they get around the ‘can’t fund a universal ID’ problem by scoping it to Medicare beneficiaries.

A very visible example of a universal ID (that comes with a unique string encoding) is an e-mail address. One can see how this will work with PHRs to create a globally unique patient ID; for example, my HealthVault ID. This is entered simply as another patient ID, and if it has ever been submitted in the past, it will be there for a positive match. E-mail addresses come with a built-in globally unique assigning authority: the second part of the address, the domain. These are globally unique simply because of the internet domain name system.
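Since the domain after the '@' is guaranteed unique by DNS, an e-mail address decomposes naturally into the same value-plus-authority shape (a sketch; the address is made up):

```python
def email_as_patient_id(address):
    # the domain part of an e-mail address is a built-in, globally
    # unique assigning authority, courtesy of DNS
    _local, _, domain = address.partition("@")
    return {"value": address, "assigning_authority": domain.lower()}
```

So a PHR address drops into an eMPI as just another opaque identifier, with no new registry required.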

Another approach to using identifiers to improve patient matching is the Voluntary Universal Healthcare Identifier, which supports creation and management of patient identifiers that are independent of any particular healthcare provider entity, so they can be used to match patients in an eMPI.

Note that with any ID the biggest concern is to be sure you have an authoritative ID. We are used to looking at drivers licenses or passports to get an authoritative identifier. We somehow trust that a patient can tell us their SSN (really bad practice given the well-known fraud and identity theft). When it comes to a patient presenting something like an e-mail address, there is a reasonable concern that this information is not authoritative. But clearly, it should be seen as just as authoritative as the SSN. Likely with an e-mail address we can work up mechanisms to prove it is authoritative before we use it, very much like the banking industry and e-mail distribution lists do.

Security Considerations
Clearly security is important for any system that holds sensitive information. The eMPI is a specific form of directory; when queried using PDQ it looks even more like one. One must recognize that patient demographics and identifiers are sensitive (valuable). So the eMPI system must be protected against security risks: risks to confidentiality, risks to integrity, and risks to availability.

Clearly, when accepting query requests or information, the eMPI needs to make sure the query request or information is authentic and authoritative. This is typically done, and profiled in IHE with ATNA, with mutual authentication of the communications. That is, the requesting system can authenticate that it has connected to the correct and authentic eMPI, and the eMPI can be assured that the system that has connected to it is authentic and authorized. This system-level authentication is usually enough for an eMPI, especially PIX/PDQ in an HIE. The XCPD profile also supports user assertions using the XUA profile. This allows the XCPD interface to an eMPI to make finer-grained decisions, but more importantly to record more fully in the audit log. This said, recognize that data returned to a system like an EHR is usually totally available to everyone in that EHR.

The eMPI should also be able to protect the different types of attributes that it holds. That is, it might consider some attributes more sensitive than others. For example, as I showed above, the eMPI can authenticate the system that is sending a query. Some of these systems might be highly trusted with all the attributes, while other systems would be allowed access only to the healthcare identifiers.

Consent enforcement
There are use-cases where the very knowledge that a consumer has information at a healthcare-providing location is controlled by privacy policy. This is true of the highly sensitive health topics (e.g. 42 CFR Part 2), but is also true in some states. In these cases the eMPI needs to recognize the current state of patient consent to disclose. That is, the eMPI must not let others know that the patient has an identifier (or data) when the patient has not authorized it. In this case the eMPI acts as if the patient simply doesn't exist.

The HIT Standards Summer Camp covered Patient Matching and produced their report in August, 2011. This report leverages the more detailed report from ONC on Privacy and Security Solutions for Interoperable Health Information Exchange - Perspectives on Patient Matching: Approaches, Findings, and Challenges.

I thank Karen Witting (IBM) for helping produce this article. Karen has extensive knowledge of the Patient Matching domain, acquired during her extensive research to produce the Cross-Community Patient Discovery (XCPD) profile.

Update: Umesh Madan, at his blog "Engineer by Day", does a fantastic job of explaining how spell-checkers work. This is very similar to how patient demographics are 'fuzzy' matched.

Tuesday, December 6, 2011

How granular does an EHR Security Audit Log need to be?

The EHR needs to be capable of being as granular as possible on any 'security relevant' event that it controls, but be configurable so that local policy can choose how much logging is desired.

I got this question from one of the IHE mailing lists. The answer is not as simple as the summary above, and it brings up some gaps in our understanding of security audit log use: the academically pure answer, the realistic answer, and operational reality.
Initial Question
I'd like some clarification on the auditing requirements for Content Consumer for XPHR.
PCC TF-1(Rev 7), section 4.8.2 says the following:
9. All activity initiated by the application implementing the Content Consumer shall generate the appropriate audit trail messages as specified by the ATNA Profile. The bare minimum requirements of a Content Consumer are that it be able to log views or imports of clinical content.
10. A Content Consumer shall log events for any views of stored clinical content.

Are these statements intended to mandate the auditing of each individual document that is viewed, each time that it is viewed, or is it adequate for the application to log that the user accessed functionality where clinical content can be viewed, without providing specifics?
Once a 'system' takes on the responsibility of the ATNA 'Secure Node' or 'Secure Application', the responsibilities of the IHE ATNA actor are system wide. This means that the system is responsible for creating audit messages for ANY of the security auditable events identified in ATNA (see Table 3.20.6-1, Audit Record trigger events) that it itself mediates. (It is not responsible for events that it is not in control of.) This means that the system must be capable of recording, as an ATNA audit message, all the times a user accesses patient data within the control of that system. Not just the accesses to XPHR data. Not just the accesses to XDS. All accesses.

Yes, we know this is a big burden. But once a system is 'trusted' to communicate with other systems, especially in an HIE, it becomes part of a bigger system. If the system can't be trusted to record all security relevant events, then it should not be trusted to communicate at all. The HIE is a massive web of trust. Each system is responsible for controlling and monitoring its own domain, but each system is connected into a larger domain (e.g. an XDS Affinity Domain). See my bloginar of the IHE - Privacy and Security Profiles - Introduction.
This created more questions

Thank you for your response, John. I do understand the overall objective and goal for auditing via ATNA; however, I am still trying to get a better understanding of how granular we need to get with auditing events in an EHR system, particularly when an event is viewed. The explanation/examples below may seem very detailed but it would help me and our team get a better sense of what is required/your thoughts. Thank you.

In this example the following is true: the system is an EHR, not a PACS/Radiology system; the Radiologist has the proper security and privileges to access any/all x-rays in a patient record that he is assigned to; and the Radiologist has the proper security and privileges to document findings on any x-ray in a patient record that he is assigned to.

The Radiologist 1) accesses a patient record via the Radiology Application to review a chest x-ray study. Then the radiologist 2) documents his findings. During his dictation he looks at two prior x-ray studies, 3) and 4). Then he actually looks at the first x-ray study a second time.

Does the system have to log like this:
1) “Patient Record Accessed – Chest X-ray study from 9 Nov 2011 @ 09:15 viewed”
2) “Patient Record Updated – Chest X-ray study from 9 Nov 2011 @ 09:15 updated”
3) “Patient Record Accessed - Chest X-ray study from 12 Dec 2010 @ 09:15 viewed”
4) “Patient Record Accessed - Chest X-ray study from 5 Oct 2010 @ 09:15 viewed”
5) “Patient Record Accessed – Chest X-ray study from 9 Nov 2011 @ 09:15 viewed” (2nd time)

Or can it look like this:
1) “Patient Record Accessed – Radiology Application”
2) “Patient Record Updated – Chest X-ray study from 9 Nov 2011 @ 09:15 updated”

Would it be sufficient for the system to log “Patient Record Accessed from Radiology Application” when the Radiologist first accesses the patient record to see the Chest x-ray(s)? The scope of what the radiologist has security and privileges to see in that application is known, and any/all radiology studies are available for him. Is it the intent of the standards to log “eyes on the screen” for each and every radiology study that is viewed?

There are an enormous number of health care events that are seen by clinicians on a daily basis. Just opening the “Medication Administration Record” application in an EHR can let a clinician see dozens of med admin events simultaneously. These med admin events are ATNA auditable “medication” events when they are prescribed, perfected, and delivered. But is it the intent of the standard to literally log each and every med admin event that a clinician viewed? Or does an Audit Log Record such as: “Patient Record Accessed – Medication Administration Record” meet the intent of the standard?
Opening the “Patient Schedule” application in an EHR can let a clinician see dozens of orders and scheduled health care events simultaneously for a patient, everything from an order for discharge, the completion of a bed bath, or a scheduled surgical procedure. Is it the intent of the standard to log each and every view of a health-care-service event that the clinician reviewed from the schedule?

The ATNA auditable “study-used” event may suggest a different audit logging level for a radiology study than for a bed bath or patient teaching episode. But does that event apply to an EHR or to the PACS/Radiology System that owns the study? There are also specific audit record requirements for some special events such as a CCD import (PHI-import), but once the CCD is in the system, there is nothing in the standard that suggests a different level of audit logging for each view of the CCD.

If it is necessary to specifically log every view of every specific radiology study or CCD, then is it also necessary to log every specific view of every administration of a medication, every bed bath, every bedside teaching episode, etc.? Each of those events would be considered an ATNA auditable “health-service-event” when it was documented. But there is no specific “health-service-event-used” ATNA trigger event in the standard.
The answer to this question is complex. 

I will address an easy one first. In your example it is not clear if the radiology viewing application is considered part of your EHR 'security boundary', or is outside it. I bring up the term security boundary to describe the logical extent of your ATNA "Secure Node" or "Secure Application". Any security relevant event happening inside this boundary needs to be auditable, but any communications outside of this boundary need to be strongly authenticated. So drawing your boundary is important. If the radiology viewing application is outside this boundary then you are not responsible for the auditable events that happen in that radiology viewing application.

A Security/Privacy office will take what they can get. So, the more you can record in the audit log the better. But there does come a point where the detail becomes noise and the goal of Surveillance is not aided. Remember that the goal of the Security Audit Log is to provide enough information that the Security/Privacy office(s) can determine that the users are following the Policies, thus detecting abuse by legitimate users and malicious actors. Yes, part of this is the privacy office producing an Accounting of Disclosures and an Access Report. What you are struggling with is a common problem for which there are few good answers available.

In DICOM they have answered this question by saying that the DICOM "Study" is the object of interest. When a Study is accessed, an audit event should be recorded. But for all the information within a study, audit events are unnecessary. It is assumed that if you accessed any part of the Study, you have accessed all of it (from a security perspective; for medical responsibility we don't look to the security audit log).
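A sketch of that study-level granularity (Python, with invented names; ATNA and DICOM define the audit message format, not this application logic):

```python
# One audit event per (user, study) accessed; further views of images
# within the same study do not generate additional security events.
audit_log = []
_already_audited = set()

def record_study_access(user_id, patient_id, study_uid):
    key = (user_id, study_uid)
    if key in _already_audited:
        return  # more of the same study: no new security event
    _already_audited.add(key)
    audit_log.append({
        "event": "study-accessed",
        "user": user_id,
        "patient": patient_id,
        "study": study_uid,
    })
```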

For the use-case you outline, either answer is likely right. You didn’t include the ‘study’ definition, so I can’t tell if they are different studies. Also, looking at something, switching away to something else, and then coming back to the first seems more like you never left the first. From a Security/Privacy perspective they want to know what was viewed, not how many times it was viewed (although some might care, and surely efficiency studies might find counts useful, but that is not a legitimate use of the security audit log).

A CDA document is an equivalently ‘nice’ sized object. But as you point out, a CDA document is often decomposed into the EHR and not really accessed as a document after that. I would view a decomposed CDA as encapsulated into the EHR, now under the proper control of the EHR and no longer in CDA form.

An EHR as a standalone system (aka EMR in terms of ONC) is a very complex problem. One viewpoint that I have heard is that the EHR is seen as one patient-centric object. That is, one security object per uniquely identified patient. When a doctor selects a patient in the context of the EHR, one event is generated. Thus from the security/privacy perspective one must assume that the user viewed ‘everything’ in the EHR. This is likely bigger than desired, but trying to define a hard boundary smaller than that is simply not possible with the way we understand EHR today.

I will say that even within the context of an EHR, any events to create data, export data, or import data should be considered security events worthy of recording. An import or export event is when data is passed into or out of the control of the security/privacy system. This assumes that the EHR boundary is defined by the Access Control system, which may not be the logical or physical boundary. This is also involved in your use-case, given that it seems you are calling upon an external viewer, which I assume is another control environment. Much in ATNA is very specific to how one draws the boundary of their ‘system’.

In my opinion, the export event is the most important to be detailed. Again, the level of detail is not clear; but identifiable objects such as CDA documents clearly are individually identifiable.

Another exception to this rule would be when someone (e.g. patient or doctor) puts specific policy limitations on a specific object inside the EHR (or sub-object given the above discussion). That then defines that object as something of interest and thus makes all accesses to that object auditable.

The last point is that you must satisfy the needs of the customer (using organization). So your customers will ultimately define the satisfaction criteria. This is typically done by providing sufficiently many auditable events and configuration controls that allow authorized administrators (security/privacy office) to turn off some of them (it would be better to have them turn on events, but that is a different issue).
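One way to structure that, as a hedged sketch (the class names and event classes are hypothetical, not from ATNA):

```python
class AuditConfig:
    """Which event classes the security/privacy office has enabled."""
    def __init__(self, all_event_classes):
        self.enabled = set(all_event_classes)  # default: record everything

    def disable(self, event_class):
        self.enabled.discard(event_class)

class AuditService:
    """Always capable of generating every event; recording is policy-driven."""
    def __init__(self, config):
        self.config = config
        self.repository = []

    def record(self, event_class, detail):
        # The event is always record-able; configuration decides if recorded.
        if event_class in self.config.enabled:
            self.repository.append((event_class, detail))
```

The point is that the capability to generate every auditable event is built in; the configuration only controls which ones are recorded.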

Great question for professional societies like HIMSS to look at…

Monday, December 5, 2011

Document Submission: Audit requirements under error conditions

I got this question through NwHIN-Exchange: When a receiver of a Document Submission request encounters an error, the entire submission is required to be backed out (i.e. the operation is atomic). Is the receiver still required to log audit data in this case? Required not to? Permitted but not required?

Recognize that this is a specific question about the transaction to submit a document: the XDS and XDR transaction "Provide and Register". This transaction is defined to be atomic; it must succeed or fail as a whole. If anything causes part of the transaction to fail, the whole transaction must fail, and no changes are made.

There are two very different views that could be taken on this question:
a) Since everything is backed out, no changes were made. Thus why log anything.
b) Someone tried to do something, and any attempt to do something needs to be logged.

On (a): this is not a 'security or privacy' view; it is a "Medical Records" perspective. That doesn't make it wrong, but it does move the motivation. The ATNA audit logging is not for medical records retention reasons; it is for security/privacy surveillance. That is to say, the reason ATNA records events is to have an audit log that can prove that the security/privacy controls are working properly.

On (b): a system needs to have the capability to record the audit log event. The fact that the security-relevant event is a transaction being rejected vs the same transaction being accepted is simply an attribute value in the audit log message. In the case of the transaction being rejected, this is simply recording the fact that it was rejected (EventOutcomeIndicator) and why (EventOutcomeDescription).
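As a sketch of what that looks like in an audit message (the element and attribute names follow the RFC 3881 / DICOM audit message schema in spirit, but this fragment is illustrative, not schema-complete):

```python
import xml.etree.ElementTree as ET

def audit_event(event_code, success, description=None):
    """Build a minimal audit message; success vs rejection is just an
    attribute value, the event itself is recorded either way."""
    msg = ET.Element("AuditMessage")
    evt = ET.SubElement(msg, "EventIdentification", {
        "EventActionCode": "C",                    # e.g. a document submission
        "EventDateTime": "2011-12-05T12:00:00Z",
        # success and failure are the same event with a different outcome value
        "EventOutcomeIndicator": "0" if success else "8",
    })
    ET.SubElement(evt, "EventID", {"csd-code": event_code})
    if description:
        ET.SubElement(evt, "EventOutcomeDescription").text = description
    return ET.tostring(msg, encoding="unicode")
```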

Some will wonder if it is useful to record all of these events. This is a different factor entirely. The event must be "record-able"; what is being questioned is whether it always needs to be "recorded". This is a question of configurability of the audit system. Classes of audit events might be disabled at the direction of some organizational and operational policy. They might be disabled at the generating system, or might be disabled at the Audit Record Repository (meaning not recorded). But this is a configuration. The system must still be able to generate the audit event.

Wednesday, November 30, 2011

Handling the obligation to prohibit Re-disclosure

There is much discussion lately on a need to communicate along with Patient Data that the Patient Data can’t be re-disclosed. This very specific ‘obligation’ comes up often. This is just one of a set of ‘policies’ or ‘policy fragments’ that need to be discussed when putting together an Organization, HIE, Community, National system (NwHIN-Exchange and Direct Project), or Multi-National System (epSOS).

I think if people were to think through all of the use-cases, there is almost always a need for the obligation to not re-disclose the data that was communicated. It is actually simple data governance regardless of Privacy Policy. One should only publish, or more generically disclose, data that they themselves created. That is not to say that you should not include in your documents fragments or knowledge from previous documents. You should always include relevant evidence, with attribution. This is the topic of ‘Data Provenance’ discussions. This is typically a topic of Medical Records Retention.

So back to the specific obligation to not re-disclose. I would assert that this obligation simply becomes part of the rules-of-the-road, or data-use-and-reciprocal-support-agreement (see NwHIN Exchange DURSA, section 16). That is, this policy is simply elevated to an overarching policy. Thus it does not need to be encoded at the transaction level. It is already implied through the fact that there is an acceptable communications pathway, acceptable because of out-of-band agreements. By not trying to include it at the transaction level, we have a simpler transaction.

We do this for any ‘rule’ that we can. The more we can move into high-level policy or governance the better. We are always trying to have simple transactions. This simplicity drive is not because we want to ignore Privacy, Security, Data Governance, or anything else. We strive for simplicity because it is more ‘simple’ to implement and thus more likely to be implemented. Simplicity is also a prime factor in robustness.

Update: Based on some conversations... It might be better to think about having a way to let the receiver of data know that they are explicitly allowed to re-disclose. Hmm. That is not quite an obligation; that would be an allowance-beyond-baseline-rules.

Monday, November 21, 2011

Access Controls: Policies --> Attributes --> Implementation

The IHE Access Control white paper describes through a diagram how Policies affect the different resource domains (Users, Patients, Data, etc.), and ultimately where the Policy Decision Point gets that information when it needs to make a decision. This simple concept is important to understand in order to determine any gaps in implementation or standards. The following is Figure 14, found on page 35. This diagram does not propose to show all policies, all domains, or all attribute sources. But it does show many.

The paper goes on to analyze this deeper and Figure 17 (shown below) shows a different view of the attribute domains. In this diagram we can see the different attributes (little red boxes), grouped into the domains (big grey boxes).

The paper then shows in Figure 24, the classic XACML engine diagram with annotation on where these issues could possibly be satisfied. Clearly this is just one possible solution,  but it is useful to view concrete models sometimes in order to understand the abstractions.
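To make the abstraction concrete, here is a toy decision function in the spirit of a XACML PDP: attributes from the user, patient, and data domains are combined to reach a Permit/Deny. The attribute names and rules are invented for illustration:

```python
def pdp_decide(user_attrs, patient_attrs, resource_attrs):
    """Combine attributes from the different domains into one decision."""
    # Patient consent comes from the patient domain (e.g. a consent registry)
    if not patient_attrs.get("consent_given", False):
        return "Deny"
    # Role comes from the user domain (e.g. an identity provider)
    if user_attrs.get("role") not in ("physician", "nurse"):
        return "Deny"
    # Sensitivity labels come from the data domain
    if resource_attrs.get("sensitivity") == "restricted":
        return "Permit" if user_attrs.get("role") == "physician" else "Deny"
    return "Permit"
```

A real XACML engine expresses these rules as policy documents evaluated by a generic engine, but the attribute-gathering pattern is the same.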

This just touches upon a few concepts from the Access Control Whitepaper. The paper is far more comprehensive than this.

Saturday, November 19, 2011

Calendar of Healthcare Standards Activities

Keith started this Calendar. I am just re-publishing it. I updated it with the dates that I know of. I added HL7, IHE, and ISO TC215. I don't know the DICOM dates. It seems it would be good for all these dates to be known, so feel free to offer up more key dates. If you don't have access, you will need to ask Keith, or just let us know.

Friday, November 18, 2011

Non-Repudiation is a very old art

Updated: The video recording is available here.

During the ONC Annual Meeting I absolutely enjoyed Jay Walker's keynote, “Achieving Big Changes”: a wonderful history of technology, starting with a clay device from 2000 BC that was used as a receipt.

I think it is the white cone on the far right in this picture from the Wired article on Jay Walker. This is a cone of pottery with markings on it. The excerpt for this picture identifies it as "a truly ancient storage device, a Sumerian clay cone used to record surplus grain."

I really hope that this webinar is available for replay as it is a fantastic lesson in where we get much of what we have today. In fact I think that he outlines very well many of the requirements that we still struggle to achieve with modern technology. Jay went on to explain the role many ancient artifacts, from his own collection, played in advancements.

The clay device that Jay showed is not unlike the ‘Cuneiform tablet’ shown at the Metropolitan Museum of Art. The ‘technology’ at the time allowed for the creation of a receipt that was not possible for the holder to modify, or at least any changes would be obvious. The use-case, as Jay explains, was to record the amount of surplus grain that a farmer had deposited so that later the farmer could get back the grain or the money it was worth. The receipt writer would make marks on clay, fire the clay to make it permanent, and give the receipt holder the hardened pottery. Later the farmer could present the pottery and the merchant would be able to tell that it is legitimate and not changed.

So these 4000-year-old devices are among ‘the’ earliest examples of non-repudiation, the key characteristic that people look to for electronic signatures (Digital Signatures). Jay pointed out that not only are these some of the oldest examples of writing, but also of non-repudiation. This shows how 'need' and 'value' drove the invention of these pottery-based receipts. This is the same concept we look for in electronic signatures: something hard to create, hard to falsify, and verifiable. It also shows that the technology scales with the value it is protecting.

Thursday, November 17, 2011

IHE N.A. Connectathon Conference - January 11, 2012

 This just crossed my desk:

Subject: IHE N.A. Connectathon Conference | Save the date! January 11, 2012 

IHE North American Connectathon Conference 2012
Save the Date: January 11, 2012 in Chicago, IL.
Registration will open soon.

Join a preeminent cadre of interoperability, information standards, IHE and health information exchange experts for a one-day educational and networking event at the IHE North American Connectathon Conference, January 11, 2012 at the Hyatt Regency in downtown Chicago, IL.

The IHE Connectathon Conference is a cornerstone of the annual IHE North American Connectathon and will see record-breaking participation this year! Over 120 organizations are participating, testing 150+ systems. Attendees at the Connectathon Conference will be given special access to the testing floor and a guided tour of the event.

IHE USA is proud to announce an exciting and dynamic array of speakers and educational sessions for this year’s Conference. Please join us for this important event. Additional information regarding IHE N.A. Connectathon, plus the Conference dates and location are listed below. If you have other questions or need more information please contact us at

·         Opening Keynote - Delivering High-value Health Care through Regional Health Information Exchange                                                                                                                                               
Eric Heflin, Chief Technology Officer, Texas Health Services Authority
·         Leveraging IHE XDS to Achieve Health Information Exchange - Real World Implementations                                                                                                                                  
Holly Miller MD, MBA, FHIMSS, CMO, Med Allies    
Jim Younkin, IT Program Director, Geisinger Health System, KeyHIE    
·         Current Advancements in Medical Device Integration
Elliot B. Sloane, PhD, CCE, FHIMSS
Professor and Director of Health Systems Engineering
Drexel University School of Biomedical Engineering
·         Exploring Open Source Tools to Achieve Interoperability - Panel Discussion                                          
James St. Clair, CISM, PMP, SSGB, Senior Director, Interoperability and Standards, HIMSS                                                                                   
Rob Kolodner MD, EVP, CIO, Open Health Tools                                                                               
Ken Rubin, Object Management Group
·         The Next Revolution in Standards-Based Image Sharing                                                                           
David Mendelson MD, FACR, Chief of Clinical Informatics, Mount Sinai Medical Center
·         IHE North American Connectathon Introduction and Guided Tours

Conference Dates & Logistics
The IHE N.A. Connectathon Conference is open to the public and we encourage IHE members to invite interested organizations and individuals that want to learn more about IHE.
Date: Wednesday, January 11, 2012
Educational Sessions: 9:00 – 4:30pm CT
Cocktail Reception: 4:30 – 6:00pm CT
Meeting Location & Hotel Accommodations:
Hyatt Regency - Chicago, IL.
151 East Wacker Drive
Chicago, IL 60601
Hotel Reservations: Click here.
Registration fee: $195.00

IHE Connectathons
IHE Connectathons are held in locations worldwide. At the IHE North American Connectathon 2012, over 120 healthcare IT organizations are registered to test 150+ systems. Conference attendees will learn about the IHE testing process, the IHE Profiles that are its foundation, and IHE's support for critical improvements in healthcare.
Visit the official IHE Connectathon webpage for more information >>
If you have additional questions, please contact

IHE work items for 2012

The IHE IT Infrastructure Technical Committee met this week to determine the work items for the 2012 development season. They started with the IHE IT Infrastructure Planning Committee's selection of work item proposals. The short answer is that they agreed to take on all of the work item proposals, with a few scope reductions.

The work items for 2012 are:
  1. Completion of the De-Identification cookbook – This provides instructions to other IHE domains on how to create profiles that use anonymization and pseudonymization tools. I am co-editor, so I will be very busy working on this; hopefully we complete it early next year.
  2. Critical and Important Results – This is a white paper proposal to expand on the need to notify someone when something critical or important is uncovered. The idea is that when such a result is discovered, one needs to determine who should be told about this information and how they should be told. This seems to me to be similar to how we expect PWP or HPD to be used, but with more deterministic results.
  3. Configuration Management for Small Devices – This is a white paper effort to explore the area of configuration management in a very broad way. The expectation is that this white paper could point at common solutions from general IT (like LDAP, DHCP, DNS) for problems that are not healthcare specific, while identifying gaps that are specific to Healthcare. These gaps could then be proposed as work items next year. This work needs to be broken into two phases so that we can focus on phase 1, assuring we don't take on too much work while still producing something useful to the community.
  4. Fix XD* Technical Framework – This is a project to fix up the current documentation around the XD* family of profiles. There is a well-known list of things that people who come to IHE for the first time can’t find. These tend to be things that long-standing members simply assume are documented. This realization comes through Bill’s experience with the Connectathon test tools development and assisting individuals with their development efforts. This item seems to always be outstanding, but we must take it on as a top priority and get it done well. The result MUST not change any normative meaning. This is simply reforming the documentation so that it is more readable and understandable.
  5. Document Access for mHealth – This is the project that Keith (and I) submitted. The proposal today mostly identifies the constrained environment that is most prevalent on mobile devices (phones, tablets, etc.) but which exists in other places. This constrained environment has trouble with the SOAP stack used in the XDS/XCA environment, and also finds the ebRIM encoded metadata harder to manipulate. The proposal is thus to come up with an interface (SOA-like) that an organization can offer to their users that is more attractive to these developers and thus will drive Apps that might be more reusable across organizations.
  6. Patient Encounter Tracking Query – This is a profile proposal to address the need for a system where actors that know where a patient is can record the location, so that others that want to find the patient can discover their location. This might be automated with things like RFID, might be automated through registration desk activities, or might be manual. The profile proposal looks to leverage the PAM and PDQ profiles. This one really needs an English-speaking editor and mentor. If we don't find one by December, this work item needs to be suspended.
Feel free to "Ask me a Question" if you want to know more.

Friday, November 11, 2011

XDS/XCA testing of Vocabulary Enforcement

I was asked what vocabulary should be used to test whether systems that have implemented XDS and/or XCA actors are compliant. The problem with the question is that it focuses on the technical specifications and ignores the reality of how these standards are used over time. The technical specification does require a Registry to reject new document submissions with metadata entries having values not on the Affinity Domain approved list. So it seems logical to test that a Registry will enforce this code-set behavior.

In general I point to the Robustness Principle, also known as Postel's law after Internet pioneer Jon Postel: "be conservative in what you do, be liberal in what you accept from others". I would add to this Robustness Principle that you must apply the rules to the context of the situation/transaction. Let me explain:

In an XDS operational environment, such as a state RHIO like Connecticut's, there will be a code-set that defines the acceptable vocabulary values for any of the metadata entries. Today this is logically the vocabularies found in HITSP-C80. But the operational code-set is a living list; it will change over time. This is already happening in the S&I Framework (the new HITSP). This will happen in any implementation, as there are always new documents being defined, and thus new vocabulary being needed. It is this change over time that is potentially not obvious when focused on the specifications and the here-and-now. More important is that a change over time can't invalidate historically approved documents.

Publication of New Documents
In the case of an XDS Registry, it is useful for the Registry to warn a publisher that they are publishing a new entry with metadata values that are not approved. This is helpful as it allows the publisher to change the metadata values to acceptable values and try again. This typically involves using a different, updated document template. Publication does need to be held to the currently accepted list of documents that are allowed to be published.
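A sketch of that Registry-side check (Python; the code-set contents and metadata shape are invented for illustration; real Registries validate ebRIM metadata against the Affinity Domain configuration):

```python
# The Affinity Domain approved code-set is a living, configurable list.
APPROVED_CLASS_CODES = {"34133-9", "57133-1"}

def validate_submission(metadata):
    """Return a list of errors; an empty list means the submission is accepted."""
    errors = []
    code = metadata.get("classCode")
    if code not in APPROVED_CLASS_CODES:
        errors.append(f"classCode {code!r} is not in the Affinity Domain code-set")
    return errors
```

Note this check applies only at publication time; Query into the longitudinal record, discussed next, should stay liberal.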

An interesting twist is that publication may want to be more restrictive than the overall acceptable set. For example, a document that was acceptable in the past may be found unacceptable in the future. This doesn't mean that the old documents are bad, but that there is a better new way to document the same information. This has not been discussed in IHE, as it is really an example of a policy statement that an operational environment can make; a decision that doesn't affect the interoperability specification.

Query into the longitudinal Record
XDS Query likely needs to be more generous, as this is a probe into the longitudinal record: a probe back in time, to a time when a different set of documents was considered acceptable, documents that would be considered today to be incomplete. So it seems to me that an XDS query should be allowed to ask for anything, and XDS Query results need to be allowed to include anything in the longitudinal record. If a new Document Consumer can't understand the old documents, then it needs to be robust to this possibility. The Document Consumer likely does need to notify the user that there are documents that it can't process, because there might be a safety concern if data is silently dropped. Hopefully all Document Consumers would try really hard to support anything that could possibly be in the longitudinal record.

HIE merge
A disjoint version of this is the use-case where a system publishes a document into its local HIE in a way that is fully compliant with that HIE, and later that HIE joins a larger HIE. Should those historic documents be unavailable across the larger HIE? Surely they should not be forced to be republished under the new HIE rules. A similar situation will also happen over time: 20 years from now we will have a very different view of what is logical for publication, while we must accept all 20 years of data as legitimate.

XCA (e.g. epSOS, NwHIN-Exchange)
An XCA (NwHIN-Exchange) environment is just the Query and Retrieve side, so it should be treated according to the XDS Query comment above.

Wednesday, November 9, 2011

IEC 80001 Step-by-Step Webinar

Update: The recording is now available

Join us for an informative Step by Step Risk Management Webinar designed to provide additional guidance for Responsible Organizations just beginning to implement the risk management process, as part of IEC 80001-1. 

This session will provide step-by-step information for organizations just beginning to implement the risk management process required by IEC 80001-1. It will provide guidance in the form of a study of risk management terms, risk management steps, an explanation of each step, step-by-step examples, templates, and lists of hazards and causes to consider.

IEC 80001 - Step by Step Risk Management
Wednesday, November 16th
2:00 p.m. (EST)
Presented by Karen Delvecchio

Karen Delvecchio manages the Networks Engineering team for Patient Care Solutions in GE Healthcare, developing network infrastructure and networked-client capabilities with emphasis on risk management. As a member of JWG7, the committee responsible for IEC 80001-1, Karen was very involved in the development of the standard and related Technical Reports.

Tuesday, November 8, 2011

OCR Launches Privacy and Security Audits

This just crossed my desk

From: OCR HIPAA Privacy Rule information distribution [mailto:OCR-PRIVACY-LIST@LIST.NIH.GOV] On Behalf Of OS OCR PrivacyList, OCR (HHS/OS)
Sent: Tuesday, November 08, 2011 8:39 AM
Subject: OCR Launches Privacy and Security Audits

November 8, 2011

The American Recovery and Reinvestment Act of 2009, in Section 13411 of the HITECH Act, requires HHS to provide for periodic audits to ensure covered entities and business associates are complying with the HIPAA Privacy and Security Rules and Breach Notification standards.  To implement this mandate, OCR is piloting a program to perform up to 150 audits of covered entities to assess privacy and security compliance.   Audits conducted during the pilot phase will begin in November 2011 and conclude by December 2012.

More information regarding OCR’s Pilot Audit Program is available on the OCR website at