Discussions of Interoperability Exchange, Privacy, and Security in Healthcare by John Moehrke - CyberPrivacy. Topics: Health Information Exchange, Document Exchange XDS/XCA/MHD, mHealth, Meaningful Use, Direct, Patient Identity, Provider Directories, FHIR, Consent, Access Control, Audit Control, Accounting of Disclosures, Identity, Authorization, Authentication, Encryption, Digital Signatures, Transport/Media Security, De-Identification, Pseudonymization, Anonymization, and Blockchain.
Friday, March 30, 2012
FW: IHE Radiology Technical Framework Supplements Published for Public Comment
Complexity
An HIE solution should be just as complex as it needs to be, and no more complex (analogous to Occam’s razor).
I have had multiple discussions this week about how complex this or that HIE standard is. These usually come back to the statements from the HIT Standards NwHIN Power Team evaluation of Direct vs Exchange; in their recommendation they indicated that Exchange was complex. It is amazing how these things keep coming up. I argue that the two have different goals that overlap, and that the solution to this overlap is logical progression, not a totally different approach. Thus we should not be looking to choose one or the other, but rather choose both and apply each to the use-case it targets.
Page count: Given that our government continues to back projects like HITSP and the S&I Framework that consider page count an important measure, let’s look at the page counts of Direct and Exchange.
Direct Project 89 Pages
- Applicability Statement for Secure Health Transport 20 pages
- XDR and XDM for Direct Messaging 23 pages
- The Direct Project Overview 14 pages
- Direct Project Overview Presentation 15 slides
- Direct Project Security Overview 7 pages
- Deployment Models 10 pages
NwHIN Exchange Specifications 133 Pages
- Authorization Framework 23 pages
- Document Submission 22 pages
- Messaging Platform 19 pages
- Patient Discovery 23 pages
- Query for Documents 20 pages
- Retrieve Documents 15 pages
- Web Services Registry 11 pages
Reality
Yes, you caught me… I didn’t include the page counts of the IHE, OASIS, W3C, or IETF specifications that they both reference. It is clear that page count is a hard thing to figure out. I would argue that complexity is also hard to figure out. The only reason e-mail seems easy today is that the last 20 years have worked out the kinks. In the 80s I wrote an SMTP system for DOS; it ran as a TSR. That was not easy to do, but I will admit that anything that could run as a DOS TSR must be pretty simple. Then again, it didn’t support all the protocols we include today in the simple term ‘e-mail’, and it didn’t support S/MIME at all.
There is far more similarity in technology between the two than the comparison suggests. Where Direct uses MIME, Exchange makes very similar use of SOAP carrying multiple parts. One can easily argue that the Direct use of S/MIME to secure the communications is far harder than mutually-authenticated TLS; yet both rely on X.509 Digital Certificates to prove identity and authentication. I would actually argue that none of this matters at all, as these are off-the-shelf libraries that are not specific to healthcare. Even in the healthcare space: where Direct includes XDM and XDR, Exchange uses XCA and XDR. Much of the complexity of XD* shows up in both, just in different modes.
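To make the "off-the-shelf" point concrete, below is a minimal sketch of mutually-authenticated TLS using nothing but the Python standard library. The host name and PEM file names are illustrative assumptions, not from any specification; the point is that the X.509 machinery is generic code with nothing healthcare-specific in it.

```python
# A minimal sketch, assuming client.pem/client.key/ca.pem exist and a
# hypothetical gateway.example.org endpoint; mutual-TLS is ordinary,
# non-healthcare-specific code.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("ca.pem")              # trust anchors for the server's X.509 cert
ctx.load_cert_chain("client.pem", "client.key")  # our X.509 identity, presented to the server

with socket.create_connection(("gateway.example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="gateway.example.org") as tls:
        # Both sides are now authenticated via X.509 certificates; application
        # traffic (e.g. a SOAP request) is integrity-protected in transit.
        tls.sendall(b"POST /xca/query HTTP/1.1\r\nHost: gateway.example.org\r\n\r\n")
```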
In both cases there are Open-Source reference implementations. Actually for Direct there is only ONE that I know of, whereas Exchange has 2 or more (NIST, Open Health Tools). See: http://wiki.ihe.net/index.php?title=Implementation
So, yes, the NwHIN-Exchange specifications are harder, significantly harder. They are more complex because they are trying to achieve more than simple push. This is not in any way to say that Direct isn’t what it should be; it was designed to be a simple push replacement for FAX. What angers me is that blanket arguments of complexity are being used to indicate that Exchange is bad.
The NwHIN-Exchange provides in addition to what Direct can do:
- Service Endpoint Configuration Discovery
- Patient Identity discovery
- Patient data location discovery
- Patient data query, when the data is needed
- Pull of documents, when the data is needed
- Security model that supports federated identity and layers
- Privacy model that supports confidentiality classifications and consents
- Metadata that is queryable, yet independent of the document format
  - type of document (clinical type, format type, mime type)
  - provenance (author, role, specialty, institution, type)
  - the patient identity
  - privacy/security classification tags
  - integrity protection independent of transport
  - relationships between documents (predecessor, successor, signs, transform, amendment, etc.)
  - date ranges of the healthcare information
- Support for Digital Signatures
- Platform for multi-organizational workflows
- Deployment models for XDS or other HIE architecture
The complexity is really needed. In order to support the above capabilities we need to define a metadata model that is comprehensive enough without being tied to a specific document type, or being overly descriptive of the healthcare condition. This is a difficult tradeoff, but I think XDS* got it right, and defined it in a way that local policy can choose to be expressive or conservative. In the absence of a National Patient ID, we are forced to do all kinds of tricks to discover where a patient's data might be, in a way that doesn't expose that patient unnecessarily and has enough controls to allow a really high-quality match. See: NwHIN-Exchange use of XCPD. In order to support a privacy and security model that can handle patient consent, yet also handle the fact that this exchange is between competing healthcare organizations, IHE called upon the power of SOAP, SAML, and TLS. Yes, these are not as simple as REST, OpenID, and HTTPS; but the additional capabilities are needed in the backbone. This is not inconsistent with mHealth use of REST, OpenID, and HTTPS. There are more...
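As a rough illustration of that metadata model, here is a hypothetical XDS DocumentEntry sketched as a Python dict. The attribute names follow the XDS metadata concepts listed above; the values are made up for illustration.

```python
# A hypothetical DocumentEntry (illustrative values, not specification text)
# showing how the capabilities listed above become queryable metadata that
# stays independent of the document format itself.
document_entry = {
    # type of document: clinical type, format type, mime type
    "classCode": "Summary",
    "formatCode": "urn:ihe:pcc:xphr:2007",   # illustrative format code
    "mimeType": "text/xml",
    # provenance: author, role, specialty, institution
    "author": {"person": "Dr. Example", "role": "Attending",
               "specialty": "Cardiology", "institution": "Example Clinic"},
    # the patient identity
    "patientId": "123^^^&1.2.3.4&ISO",        # illustrative identifier
    # privacy/security classification tag
    "confidentialityCode": "N",
    # integrity protection independent of transport
    "hash": "da39a3ee5e6b4b0d3255bfef95601890afd80709",  # illustrative digest
    "size": 53812,
    # date range of the healthcare information
    "serviceStartTime": "201201011200",
    "serviceStopTime": "201201011300",
    # relationships to other documents (replace, append, transform, signs)
    "associations": [{"type": "RPLC", "target": "urn:uuid:..."}],
}
```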
I am involved in the S&I Framework – Data Segmentation for Privacy workgroup. This is not a simple topic, but it is made simple by the fact that IHE considered these use-cases when making that ‘complex’ XDS profile. The thing is that IHE didn’t even consider these things complex; they were very clearly needed given the use-case needs that were brought before us. This long-term, yet realistic, view has paid off. The XD* profiles could have been far more complex. Take a look at all that is in the OASIS ebXML Registry specification: really great stuff that we simply don’t need… yet.
Conclusion
Getting to some goal requires stepping stones. I do think that Direct is an appropriate stepping stone; I think the next ones are XDS for regional exchanges and XCA for federation of regional exchanges. Eventually we might get to the attribute-level exchanges defined in the PCAST report.
References
- What is the benefit of an HIE
- HIE using IHE
- One Metadata Model - Many Deployment Architectures
- Critical aspects of Documents vs Messages or Elements
- Using both Document Encryption and Document Signature
- Document Encryption
- XDS/XCA testing of Vocabulary Enforcement
- Where in the World is CDA and XDS?
- Universal Health ID -- Enable Privacy
- HIE/HIO Governance, Policies, and Consents
- Patient Identity Matching
- The Basics of Cross-Community Patient Discovery (XCPD)
- NwHIN-Exchange use of XCPD for Patient Discovery
- IHE - Privacy and Security Profiles - Cross-Enterprise User Assertion
- Healthcare use of Identity Federation
- Federated ID is not a universal ID
- Simple and Effective HIE Consent
- IHE - Privacy and Security Profiles - Basic Patient Privacy Consents
Thursday, March 29, 2012
Meaningful Use Stage 2 :: SHA-1 vs SHA-2
The Meaningful Use Stage 2 Certification NPRM asks about the use of SHA-1 vs SHA-2.
My short answer is: I agree with the current Meaningful Use Stage 2 decision to stick with SHA-1
Although the remaining life of SHA-1 is shorter than that of SHA-2, the expected uses of the hashing algorithm in Meaningful Use Stage 2 EHR technology are sufficiently covered by SHA-1.
The longer details: The specific text in the preamble is well written, and captures the specific concerns.
The certification criterion at § 170.314(d)(8) is consistent with the recommendation and recommended certification criterion by the HITSC for the 2014 Edition EHR certification criteria. The capability to detect changes to an audit log has been removed from this proposed certification criterion and added to the proposed certification criterion for “auditable events and tamper resistance” at § 170.314(d)(2). The adopted certification criterion at § 170.304(b) specifies that EHR technology must be able to create a message digest in accordance with the standard specified at § 170.210(c). The adopted standard is: “A hashing algorithm with a security strength equal to or greater than SHA-1 (Secure Hash Algorithm (SHA-1))…must be used to verify that electronic health information has not been altered.” After consultation with NIST, we understand that the strength of a hash function in digital signature applications is limited by the length of the message digest and that in a growing number of circumstances the message digest for SHA-1 is too short for secure digital signatures (SHA-2 produces a 256-bit message digest that is expected to remain secure for a long period of time). We also understand that certain operating systems and applications upon which EHR technology may rely use SHA-1 and do not or cannot support SHA-2 at the present time. Thus, we request public comment on whether we should leave the standard as it currently reads or replace SHA-1 with SHA-2.
Some Regulation References:
- §170.210(c) - A hashing algorithm with a security strength equal to or greater than SHA-1 (Secure Hash Algorithm (SHA-1)) as specified by the National Institute of Standards and Technology (NIST) in FIPS PUB 180-3 (October, 2008) must be used to verify that electronic health information has not been altered.
- §170.314(d)(8) Integrity. (i) Create a message digest in accordance with the standard specified in § 170.210(c). (ii) Verify in accordance with the standard specified in § 170.210(c) upon receipt of electronically exchanged health information that such information has not been altered.
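For the flavor of what §170.210(c) asks of EHR technology, here is a minimal, non-normative sketch using Python's hashlib. The algorithm name is parameterized, which is essentially the FIPS-based algorithm agility suggested in the conclusion below.

```python
# A minimal sketch of the §170.210(c) create/verify integrity check.
# Passing "sha256" instead would satisfy SHA-2 with no other code change.
import hashlib

def message_digest(data: bytes, algorithm: str = "sha1") -> str:
    """Create a message digest over exchanged health information."""
    return hashlib.new(algorithm, data).hexdigest()

def verify_unaltered(data: bytes, expected: str, algorithm: str = "sha1") -> bool:
    """Verify upon receipt that the information has not been altered."""
    return message_digest(data, algorithm) == expected

document = b"<ClinicalDocument>...</ClinicalDocument>"  # illustrative payload
digest = message_digest(document)             # sender side
assert verify_unaltered(document, digest)     # receiver checks right away
```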
Second: The uses of a hashing algorithm that Meaningful Use Stage 2 calls for are sufficiently covered by SHA-1. A deployment today of EHR technology using only SHA-1 will still be quite safe for a decade or more, mostly because the uses called for in Meaningful Use Stage 2 are of a type that is not going to be weakened.
Specifically, SHA-1 shows up in things like the “§170.202 transports”; yes, the transports today use SHA-1 everywhere. The point is that the use of SHA-1 in the transports is to prove that the communication did not get changed during the transfer. These transfers take minutes in most cases, and might take days for the Direct transport (secure e-mail). Either way, the receiver checks the integrity right away, so there is no opportunity to falsify the integrity checks.
The second place where SHA-1 would be used is in secure messaging with patients. This will either use the Direct project (see above) or, far more likely, a Web Portal. Given the requirements to secure the communications, this Web Portal will use common HTTPS, just like many sensitive web sites such as banking. HTTPS commonly uses SHA-1, so it meets the criteria. As with the “transports” discussion, the integrity check is done right away, so there is no opportunity to falsify the communications.
Third: They point out that there is concern in the cryptographic community that SHA-1 is potentially not strong enough for digital signatures. This point is well made. The reality is that all cryptographic algorithms eventually succumb either to a discovered vulnerability or simply to Moore's-law advancement of computing. The point is also very specific to digital signatures, and these two points are important to keep together: it is only when one has something like a Digital Signature that this concern becomes important. A Digital Signature needs to last for dozens of years or more, and would be applied to documents where there is high value in proving that someone signed them for a specific reason. It is only with a Digital Signature that falsification becomes valuable enough to make the vulnerabilities in SHA-1 worth leveraging.
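To make the signature point concrete (and only as an illustration, since Meaningful Use Stage 2 does not call for signatures), here is a sketch using the third-party Python cryptography package. It shows that the signature covers a digest of the document, which is why the long-term strength of the hash matters.

```python
# Illustrative only: assumes the third-party 'cryptography' package.
# The signature rests on a digest of the document, so a hash collision
# would let a different document carry the same signature years later.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"signed clinical document bytes"  # illustrative content

# Sign with SHA-256 (SHA-2): the long-lived artifact depends on digest strength.
signature = signer_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification, possibly years later; raises InvalidSignature if altered.
signer_key.public_key().verify(signature, document,
                               padding.PKCS1v15(), hashes.SHA256())
```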
But there is no call for Digital Signatures in Meaningful Use Stage 2. I might predict that by Meaningful Use Stage 3 we will have legitimate uses of Digital Signatures for workflows like: confirming provenance integrity on documents communicated through the patient's PHR, proving signing authority for prescriptions including narcotics, and legal agreements such as Patient Privacy Consents. But today these are not called for, and I think it is right that they are not yet called for. I would encourage the use of Digital Signatures for these, but the value doesn’t overcome the cost today; certainly not enough to mandate that everyone in the USA use them.
The most realistic of these is already covered by the DEA rule on Electronic Prescriptions for Controlled Substances.
Fourth: The last factor is that SHA-1 is commonly available in operating systems and tools today, while SHA-2 requires technology changes that would cost the Healthcare industry greatly. I covered this in the past, around the earlier proposed mandates for SHA-2. Those conditions have changed somewhat, but Healthcare is still a slow-moving industry.
Conclusion: If anything, they could change § 170.210(c) from being specific to SHA-1 and be consistent with how they handled Encryption: simply state that the integrity algorithm needs to be one specified in FIPS 180-3. This would be effectively the same as we have today, but puts the algorithm specification in the hands of FIPS, where it should be.
Update: added the Fourth factor.
Monday, March 26, 2012
Published White Paper from the Health IT Symposium
The summary report of the Health Information Technology Symposium of August 2011 is now published on their site. Their 5 observations:
OVERALL KEY IDEAS WERE TO IDENTIFY THE FOLLOWING:
1. Major issues associated with trust, privacy and security, governance, standards, patient matching, and patient choice;
2. Methodology to incentivize for healthcare quality and efficiency;
3. “Other” tools that can be used to facilitate quality and efficiency in health care. (i.e. open source, semantic, Health Information Technology Innovation and Development Environment (HITIDE), Data Element Access System (DEAS), etc.);
4. Mechanism to develop a national-level program that will assist with development of standards across governments (federal and state) and other public-private partnerships; and
5. Methodology to address a “culture of excellence” focusing on the patient. The hope is that the appropriate organization (public or private) adopts these ideas and continues to improve the efforts and initiative associated with health care within the United States.
This is about discussing a strategy for the overall principles identified in the PCAST report. It still doesn’t address how we move from a highly-decentralized and highly-competitive environment to this nirvana where all data is available to everyone in just the right form for their use. The end-goal sounds wonderful, but there need to be reasonable steps along the way, each providing more benefit than the cost of the change, yet each small enough to be executed by thousands of healthcare provider organizations, thousands of payers, and thousands of organizations looking for other benefit. This seems to me to be the failing of these visionary utopian positions. I offer stepping stones to this vision...
I worry that these visions are being thought up in a way that would benefit the visionary’s own personal life. Yes, they are thinking of themselves as patients. What they are not thinking of is that they are far more mobile than the average person. Most people stay relatively close to the same place. Yes, they have emergencies while away; yes, they move around the country chasing work or loved ones; yes, they need specialist care outside their home area. But these movements don’t require the fine-grained data manipulation called for in PCAST. These use-cases can move the data to the organization currently treating the patient and process it there. Yes, we need fine-grained access, just not through a central database or even a federated query. A consistent API, used locally or regionally, would work for most patients.
I think it would be useful for these visionaries to think about a design for a global system. That would put their personal experience closer to the experience the average citizen has relative to movement within the USA. In fact, I would like to see the USA learn from experiences elsewhere; I saw no mention of the lessons, both good and bad, that other countries have learned.
So, why doesn’t each healthcare provider have an API that allows the ‘plug-and-play’ that the report and symposium call for? Because each organization has invented its own database schema and user-interface, customized for its own needs. This customization is a result of creativity, and the proposed solution tells them to be less creative now with the offer that they can be more creative later. This is like telling a 2-year-old to eat their broccoli so that they can later get cake; it never goes over well, especially with brats who have been able to do whatever they want up to that point. We need a well thought-out plan that makes small and positive course corrections. Each step must be better, with clear ROI.
Something that I never saw said in the paper, or in any of the other discussion around PCAST, is how all of this Health IT would affect “Malpractice” (liability), with both positive and negative changes. I suspect that this unstated issue is at the center of much of the reluctance to change. Providing access to the data allows second opinions, and those second opinions might be truly finding a mistake, or might be drilling simply for some basis to file legal action. Today we put much faith, trust, and worship into the position of Doctor.
I would like to promulgate that the Document exchange model in XD* is consistent with the PCAST report, and that the use of a big object, the Document, is an evolutionary tradeoff: a stepping stone. We start today with documents, which are easier to define, manipulate, and control. This does not mean that we never get an exchange that allows manipulation at the attribute level; who knows what the future holds. In this view, the DEAS is the XDS Document Stored Query transaction; the only difference is that XDS is document based. We choose an object size that is reasonable today. I laid this out for Privacy and Security, but not for the whole system: http://healthcaresecprivacy.blogspot.com/2011/03/trust-and-pcast-model.html This seems to be what they are saying on page 8, part of the conclusion on page 14, and in a few of the sessions.
The discussion of “who is best positioned”, found on pages 21-22, is interesting. It is interesting to see the criticism of HITSP producing long and complex documents, while today the S&I Framework is doing that all over again. I was asked to review just the use-case document for Data Segmentation for Privacy; I couldn't get through the 68 pages. This is just the use-cases: 68 pages. I just gave up and gave in. Is there no way to learn from our mistakes? Each of these organizations is set up to show success through page count, not change.
The best quote I found in the document is from Page 23 on barriers to adopting Health IT “On page 60 of the PCAST report, it mentioned that it would cost between $20 and $40 million to develop standards for Interoperability. Overall, the participants of this session thought this cost range was a guess and therefore agreed that the PCAST report did not do a good job in analyzing all costs involved.”
Overall the report seems to indicate that there is some perspective on reality. I would still like to see some vision of the stepping stones, rather than so much focus on the horizon. I think that the XD* family of profiles are the core of the stepping stones as described in the IHE paper on building Health Information Exchanges.
Sunday, March 25, 2012
IHE ITI Educational Materials available
All material and webinar recordings are now available through the ITI Educational Material wiki page:
http://wiki.ihe.net/index.php?title=Current_Published_ITI_Educational_Materials
The topics covered by the IHE ITI education webinars:
- Interoperability, IHE and ITI Introduction
- Health Information Exchange: Enabling Document Sharing Using IHE Profiles
- Security and Privacy Overview
- Publication and Discovery
- Point-to-Point Transmission of Documents
- Cross-Community: Peer-to-Peer sharing of healthcare information
- Patient Identity Management
- Healthcare Provider Directories
- Cross-enterprise Document Workflow
Thursday, March 22, 2012
NwHIN-Exchange use of XCPD for Patient Discovery
Guest blog by Karen Witting, co-chair of the IHE ITI Planning Committee and prime on the XCPD profile
As an example of the flexibility inherent in XCPD, and of the detailed analysis needed to refine it to fit local policy and requirements, I will review the primary discussion topics and results associated with developing the NwHIN Patient Discovery specification. This specification adopts XCPD with several refinements in function. Its adoption by the Nationwide Health Information Network Exchange (NwHIN-Exchange) in 2009 entailed detailed discussion and compromise.
Requirement for single exact match:
XCPD is designed to return a list of possible matches which are then narrowed by the receiving system. NwHIN-Exchange made a policy decision to restrict responses to a single match, due to the concern that returning patient information for a mis-matched patient would expose that data to a non-treating entity; multiple matches imply that some of them are mis-matches, so the exposure is likely. This policy brings its own challenges, because the responding system may have more than one match that is equally good. To resolve this case the NwHIN-Exchange specification makes use of the “maybe” capability provided by XCPD (see the blog article on XCPD) to request additional demographics to aid in disambiguating the match. In fact NwHIN-Exchange extends this capability with the ability to respond by specifically requesting an SSN value.
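A rough sketch of that responder-side policy, with an assumed candidate structure; this is illustrative logic, not text from the specification.

```python
# A sketch of the single-match policy described above: return one match, or
# use the "maybe" response to ask for more demographics rather than disclose
# a list of real patients for the initiator to choose among.
def respond_to_patient_discovery(candidates):
    """candidates: local demographic matches, best score first (assumed shape)."""
    if not candidates:
        return {"result": "no-match"}
    best = [c for c in candidates if c["score"] == candidates[0]["score"]]
    if len(best) == 1:
        return {"result": "match", "patient": best[0]}
    # Several equally good candidates: answer "maybe" and request additional
    # demographics (NwHIN-Exchange adds the ability to request an SSN).
    return {"result": "maybe", "requestedAttributes": ["ssn"]}
```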
A subtlety inherent in the NwHIN-Exchange specification is the difference between a single match and multiple matches from different parts of the responding community. The specification enables a responding system to represent multiple distinct entities, each using its own patient identity domain. A patient identity domain is represented by an assigning authority, and the specific restriction in the NwHIN-Exchange specification is a single patient identifier per assigning authority. This means that while it is disallowed to provide multiple matches from which the initiating system must choose, it IS allowed to provide multiple matches which all must be used to have a complete view of the patient information from all entities represented by the responding system.
Privacy concerns associated with demographics sent: Determining the right set of demographics to be specified in the request and response resulted in lengthy debate, partly documented in Appendix B of the NwHIN-Exchange Patient Discovery specification. The debate centered on a conflict: the more demographics provided, the more certain the match; but the more demographics supplied, the higher the privacy risk to the patient. Many felt that all available demographics must be provided to ensure the best possible match; others wished for a much more restrictive set. Note that the participants in the NwHIN-Exchange are a constrained set of organizations that have agreed to the Data Use and Reciprocal Support Agreement (DURSA), but keeping the risk low is a core principle that everyone tries to achieve.
The result is a minimal set of required demographics: first & last name, gender, and birth time. In addition there is a set of required-if-available attributes, which must be included if the system has access to them and local policy and patient consent allow them to be shared: one or more addresses, a single phone number, and Social Security Number (SSN). These requirements apply to both the sender and the responder, both of which must include demographics for the recipient to use in its local demographic matching process. Typically the address, phone, and SSN are sent, along with additional names or aliases when available. Other optional attributes supported by XCPD are not commonly used.
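A sketch of assembling the request demographics under these rules, with hypothetical field names; the policy_allows callable stands in for the local policy and patient consent checks.

```python
# Required: first & last name, gender, birth time.
# Required-if-available: addresses, a phone number, SSN, included only when
# known AND local policy/consent allows them to be shared.
def build_request_demographics(patient, policy_allows):
    """policy_allows: callable deciding if policy/consent permits a field."""
    demographics = {
        "givenName": patient["given"],
        "familyName": patient["family"],
        "gender": patient["gender"],
        "birthTime": patient["birthTime"],
    }
    for field in ("addresses", "phone", "ssn"):
        if patient.get(field) and policy_allows(field):
            demographics[field] = patient[field]
    return demographics
```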
Patient identifier mutability and revoke: A concern that came out of sharing local patient identifiers is the desire to allow them to change over time; some participating systems were using short-lived patient identifiers. XCPD makes no statement about the lifetime of a patient identifier but generally assumes it is long-lived. Because of the requirement to support short-lived patient identifiers, the NwHIN-Exchange states that, once shared, an identifier can never be associated with a different person, but cannot be assumed to be valid forever. Once it is invalid, the issuing system will send an error the next time the value is received, and the holder is encouraged to re-discover that patient. So, once shared, a patient identifier is either valid, and still associated with the same original person, or no longer valid. Much discussion centered on the opinion by some that re-discovery of the patient identifier at each patient encounter was necessary; others saw concerns with this approach and preferred to re-discover only upon an error response. The specification supports either approach.
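A sketch of that "valid or no longer valid" contract from the initiator's point of view; the function names and error type are hypothetical stand-ins, not IHE transaction names.

```python
class InvalidPatientIdError(Exception):
    """Hypothetical error: issuing system reports the identifier is no longer valid."""

def resolve_patient(community, cached_id, discover, query):
    # discover(community) -> patient id via patient discovery;
    # query(community, id) -> patient data. Both are illustrative stand-ins.
    if cached_id:
        try:
            return query(community, cached_id)
        except InvalidPatientIdError:
            cached_id = None  # never reuse: it will not be re-assigned to another person
    new_id = discover(community)  # re-discover the (possibly new) identifier
    return query(community, new_id)
```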
Another approach to the short-lived patient identifier would have been to adopt the revoke transaction supported by XCPD. It was decided not to adopt revoke due to concerns that in order for it to work it would need be mandatory and not every implementer wanted to deal with that extra complexity. So the re-discovery approach was adopted but the issue is still debated by some. In particular, revoke could support lifecycle events like merge and link.
Support for local matching algorithms: A primary requirement of the NwHIN-Exchange participants was the need for local autonomy in matching patients. Typically a demographic query results in the responding side performing a demographic match through a local algorithm and returning the results of that match to the initiator. NwHIN-Exchange participants wanted the option of verifying a match on the initiating side. Thus, requirements for demographics are necessary in both the request and the response in order to enable a possible initiating side matching step. The result is that a single query can cause two matching steps, first on the responding side which provides its opinion of a matching record, and then on the initiating side which is allowed to confirm the match using its local demographic algorithm. The interaction must enable this autonomy but be agnostic about the internal methods used for demographic matching.
Broadcast vs. Targeted Query: A central challenge when using a demographic query like XCPD is deciding who to send it to. NwHIN-Exchange does not supply a service which automatically locates and sends the query to multiple participants; it is up to the initiator to do this work. Some have suggested that the query should be sent to every known endpoint, and finding every known endpoint is possible through the NwHIN-Exchange Service Registry. But sending to every known endpoint would create bandwidth problems on the responding side as the NwHIN-Exchange scales to significant numbers of endpoints.
So far the NwHIN-Exchange has not scaled particularly large (there are currently 25 exchange partners in the service registry). Even so, the guiding principle in the NwHIN-Exchange specification is to do targeted queries, focusing on a limited set of partners that are most likely to have data. Finding that limited set could be done through a regional type of query or through information received from the patient. Targeted queries are recommended, although broadcast queries are also allowed. Eventually, as it grows, the NwHIN-Exchange will need a better mechanism for selecting the targeted partners, perhaps through adoption of the patient location query (see the prior XCPD blog) or some other mechanism.
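A sketch of such target selection, with an assumed registry and hint shape; the heuristic itself is illustrative, not from the specification.

```python
# Pick a limited set of likely partners (regional, or hinted by the patient)
# instead of broadcasting to every endpoint in the service registry.
def select_targets(service_registry, patient_hints, limit=5):
    """service_registry entries and patient_hints use an assumed shape."""
    likely = [ep for ep in service_registry
              if ep["region"] in patient_hints.get("regions", [])]
    # Fall back to a bounded broadcast only when no targeting hint exists.
    return likely[:limit] if likely else list(service_registry)[:limit]
```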
Scaling challenges: Scaling was a topic of significant discussion during the development of the NwHIN-Exchange Patient Discovery specification. Demographic matching is a complex process and demands significant CPU and memory load. So scaling of the demographic matching process needs to be done in a way that systems do not become suddenly overwhelmed with requests.
As discussed in the XCPD blog, using a patient correlation engine like PIX scales better because it does the matching a-priori and can respond to requests without performing demographic matching. But this requires collecting all demographics into a single place and, at a national level, this was felt to be an unacceptable level of privacy risk.
So use of the NwHIN-Exchange Patient Discovery query must be limited to some degree and this suggests that some additional approach is needed. The Patient Location Query was designed to enable this capability by enabling the collection of patient specific record locations within various spots in the general network. NwHIN-Exchange has not adopted the use of this query at the moment but we hope that, as things evolve and the need for scaling becomes more pressing, further work on its adoption will come to fruition.
Another issue related to scaling is the need for “Deferred Mode” (see the XCPD blog). NwHIN-Exchange first identified this need in its use of XCPD. Participants presented use cases where the response to a query could take days, or even weeks, to return; the use case involved the potential for human involvement in resolving the demographic match. The Web Services Addressing based asynchronous mode did not satisfy this requirement due to limitations within the chosen Web Services toolkit. Thus NwHIN-Exchange defined and implemented the Deferred mode, and IHE has since adopted it within XCPD.
Consent:
Consent enforcement is designed into every aspect of the NwHIN-Exchange Patient Discovery specification and is an additional driver for some of the required capabilities. For example, a successful query may share a patient identifier for a patient who later withdraws consent to share data. This is another reason why patient identifiers must be seen as mutable, and why re-discovery is used to find out that there is no longer a match for a previously matched patient. Again, revoke could have been used in this case but was not adopted. In the sharing of demographics, consent is fundamental to deciding whether to acknowledge that the patient exists in the responding system, and even which demographics can be provided. NwHIN-Exchange Patient Discovery is designed to allow flexibility on the part of the participating systems in processing various approaches to consent.
Two-way and one-way pull environments:
Most participants in NwHIN-Exchange both receive and send data. For XCPD this means most participants will operate in Demographic Query and Feed mode (see the XCPD blog). But some participants, like the Social Security Administration (SSA), will never have data for others to pull and need to use Demographic Query only mode. So both of these modes were adopted for NwHIN-Exchange. The Shared/national Patient Identifier Query and Feed mode is not supported, since the U.S. does not have a national patient identifier.
Conclusion:
Adopting XCPD for use in the United States NwHIN-Exchange resulted in many trade-offs. In many cases functionality was constrained due to privacy and complexity concerns. Because of this, the specification is good at the limited scope it was designed to operate in, but more work is needed, especially in terms of broad searches for patient-specific information.
Obligatory disclaimer by Karen regarding IBM: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Resources:
- NwHIN-Exchange Specifications
- Patient Discovery Production Specification v1.0 [PDF - 214 KB]
- Patient Discovery Production Specification v2.0.doc
- Cross-Community Patient Discovery (XCPD)
- IHE webcasts, and presentations available for download.
- The Basics of Cross-Community Patient Discovery (XCPD)
- Patient Identity Matching
Updates:
- Fixed references to the Patient Discovery specification to version 2.0
- Fixed an unclear word in the conclusion
Wednesday, March 21, 2012
The Basics of Cross-Community Patient Discovery (XCPD)
Guest blog by Karen Witting, co-chair of the IHE ITI Planning Committee and prime on the XCPD profile
See the previous blog article: Patient Identity Matching. IHE has a set of profiles all focused on patient identity matching, each with its own focus. There are also recorded webcasts available for replay, and presentations available for download.
To Centralize or not:
To start, XCPD assumes that the implementation of a centralized source of patient demographics, identifiers, or record locations is unacceptable. This is a fundamental assumption, because if a centralized source of these types of information is acceptable there are many more effective means of solving the problems XCPD solves. For instance, if all demographics can be sent as feed transactions to a central patient matching engine, like a PIX Manager, then a PIX or PDQ Query is much more effective at finding patient identifiers and correlating patients than any deployment of XCPD. In fact, XCPD was created only because some countries felt that collecting the demographics of every patient in a country or region into one system would be dangerous. So XCPD is only appropriate for linking communities that are not closely tied and have only limited governance; in particular, communities that are not willing to feed all patient demographics to a single central server, or even a small number of duplicated central servers.
To Federate or not:
So, in environments where there is no centralized source of patient matching or patient record location, the only way to find things is to “ask around”. Many people have pointed out that this is inefficient and will not scale. That it is inefficient is self-evident and so those who can store the necessary content in a centralized service should do so. Whether XCPD will scale depends wholly on the degree to which a balance can be achieved between some regional centralization and federation between regions. A federated hierarchical environment is necessary, where local things are centralized locally and a reasonable number of peer-to-peer interactions are used to span across local regions.
The challenges are: what is a “reasonable number” and how do we encourage centralized approaches in every place where it makes sense. XCPD does not design the architecture; that is for organizations, nations, and regions to do. XCPD, plus many other IHE profiles and healthcare standards, can work together to allow interoperable participation in whatever the architecture turns out to be.
Once we have agreed that a wholly centralized source of patient matching is not acceptable – due to policy or security/privacy concerns or others – application of XCPD can begin.
Flexible Requirements:
XCPD is written to be flexible in its requirements. During the development of XCPD organizations throughout the world expressed requirements to the IHE team and these requirements were designed in as optional behavior. For instance:
- We want to use a national patient identifier instead of demographics.
- We want to only pull data, never support other peer networks to pull data from us.
- We want to restrict the demographics shared in a query/response due to privacy concerns.
- We want to restrict the number of matching responses due to privacy concerns.
- We want to collect patient record locations for selected patients and make that list available for other communities in order to enable efficient searching for the patients we select.
- We want to return multiple potential matches so that a local decision can be made for patient safety reasons.
- We want to avoid returning multiple potential matches due to privacy concerns but would like to request additional demographics to aid in disambiguation.
- We need to provide a cache value for patient matches, to allow for refresh of that information.
- We need to revoke prior matched patient due to policy, consent or demographic changes.
- We need asynchronous behavior.
- We want to have enough security and privacy context to make an access control decision that respects the patient's consent directives.
Because of all this optional behavior, XCPD cannot be effectively implemented out of the box. An organization must first understand the environment in which it will be deployed, the balance of risks organizations are willing to accept (e.g. false negative/false positive criteria), and how the mitigations necessary for unacceptable risks should be applied to the optional behavior of XCPD.
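One way to picture this is as deployment configuration. The sketch below captures several of the optional behaviors listed above as settings; the names are illustrative, not from the XCPD supplement.

```python
# A sketch of XCPD deployment policy: each field corresponds to one of the
# optional behaviors above, to be set after the risk analysis described.
from dataclasses import dataclass

@dataclass
class XcpdDeploymentPolicy:
    use_national_identifier: bool = False   # identifier-only query, no demographics
    respond_to_peers: bool = True           # False: pull-only participant
    max_matches_returned: int = 1           # privacy: avoid candidate lists
    allow_maybe_response: bool = True       # request more demographics instead
    shareable_demographics: tuple = ("name", "gender", "birthTime")
    support_revoke: bool = False            # revoke matches on consent/demographic change
    async_mode: str = "deferred"            # "synchronous" | "asynchronous" | "deferred"
```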
Modes of Operation:
There are a few modes of operation of XCPD to allow for different types of typical interactions:
- Demographic Query and Feed mode: This is the base mode; it allows the initiator to both feed knowledge of a local patient and query for a match at the responder site. This mode allows both sides of the transaction to learn about matching patients.
- Demographic Query only mode: Here we see a simple demographic query, asking only for the responder to provide a match to a local patient. This mode does not allow the responder to learn about a match with the initiator's patient, which satisfies the requirement for nodes that will not support pull requests from other peer networks.
- Shared/national Patient Identifier Query and Feed: This mode includes only identifiers in the query; no demographic data is shared.
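As a rough illustration of how the modes differ in what they disclose, the sketch below stands in for the query payload. The real transaction is an HL7v3 SOAP message; these dictionary keys are loose echoes of its parameters, chosen by me for readability, and the identifiers are fake.

```python
# Illustrative only: the real XCPD query is an HL7v3 SOAP message.
# The keys below loosely echo its parameters; the identifiers are fake.
def build_patient_discovery_query(mode, demographics=None, patient_id=None,
                                  home_community_id=None):
    if mode == "query_and_feed":
        # Demographics plus the local identifier, so the responder can also
        # learn ("be fed") about the initiator's patient.
        return {"demographics": demographics,
                "livingSubjectId": patient_id,
                "homeCommunityId": home_community_id}
    if mode == "query_only":
        # Demographics only; nothing the responder can correlate back.
        return {"demographics": demographics}
    if mode == "national_id":
        # Identifier only; no demographic data is shared.
        return {"livingSubjectId": patient_id}
    raise ValueError(f"unknown mode: {mode}")

print(build_patient_discovery_query("national_id", patient_id="123^^^&1.2.3&ISO"))
```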
XCPD asynchronous behavior:
The XCPD Patient Discovery transaction, the base transaction discussed here, supports three interaction modes each of which satisfies a different interaction environment. The chapter references given are within the XCPD supplement.
| Mode | Description | Target environment |
|------|-------------|--------------------|
| Normal (synchronous) | The SOAP request and response are carried in one TCP/IP connection. | Response is expected in a very short timeframe, e.g. less than 60 seconds. |
| Asynchronous | The WS-Addressing "replyTo" support is used to return the response through a separate TCP/IP connection. | Response is expected in a short timeframe, e.g. less than 60 minutes. |
| Deferred | The request is decoupled from the reply, so each is a separate SOAP interaction (Section 3.55.6.2). | Response is expected to have a lengthy delay, e.g. more than 60 minutes. |
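A trivial sketch of how an initiator might choose among the three modes, assuming the latency thresholds above were adopted as deployment policy (XCPD gives them only as examples, not mandates):

```python
# The 60-second / 60-minute thresholds mirror the examples in the table;
# actual limits are a deployment decision, not mandated by XCPD.
def choose_interaction_mode(expected_response_seconds: float) -> str:
    if expected_response_seconds < 60:
        return "normal"        # synchronous: one TCP/IP connection
    if expected_response_seconds < 3600:
        return "asynchronous"  # WS-Addressing replyTo, separate connection
    return "deferred"          # decoupled request and reply (Section 3.55.6.2)

print(choose_interaction_mode(5))      # normal
print(choose_interaction_mode(600))    # asynchronous
print(choose_interaction_mode(86400))  # deferred
```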
XCPD as a Record Locator Service (RLS):
A challenging requirement addressed by XCPD is the ability to use XCPD as a form of Record Locator Service. There are several capabilities built into XCPD which enable this functionality. But, as with everything else in XCPD, the architecture for doing this is not spelled out by XCPD since a variety of architectures may be desired. Instead there are hooks within XCPD which enable the capability to be designed. The particular capabilities for enabling this functionality are:
- Carry the sender's homeCommunityId and local patient identifier in the request (see 3.55.4.1.2.4)
- Carry the responder's homeCommunityId in the response (see 3.55.4.2.2.4)
- Declare support for Health Data Locator in the response (see 3.55.4.2.2.5)
- Support the “Patient Location Query” optional transaction
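Here is a hypothetical sketch of the record-location bookkeeping those hooks enable: a community caches which homeCommunityIds have matched a given local patient, so a later Patient Location Query could be answered from the cache. The function names and data shape are mine, not the profile's.

```python
# Hypothetical bookkeeping enabled by the homeCommunityId hooks above.
# local patient id -> set of homeCommunityIds known to hold records
record_locations: dict[str, set[str]] = {}

def note_match(local_patient_id: str, responder_home_community_id: str) -> None:
    """Record that a responding community matched our local patient."""
    record_locations.setdefault(local_patient_id, set()).add(
        responder_home_community_id)

def communities_holding_records(local_patient_id: str) -> set[str]:
    """What a 'Patient Location Query' responder could answer from its cache."""
    return record_locations.get(local_patient_id, set())

note_match("pid-42", "urn:oid:1.2.3.4")
print(communities_holding_records("pid-42"))
```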
Security Considerations:
See the Patient Identity article for a short explanation of Security Considerations. The short version is that XCPD uses SOAP and therefore can be grouped with XUA, so there is plenty of security/privacy information available for a privacy and security decision to be made. Thus things like Privacy Consent can be enforced right away, at patient identity discovery.
Conclusion:
XCPD is just a profile from IHE; to understand how it works and to determine the scalability of the solutions it enables, one must factor in specific policies, the operational environment, application workflow, functionality, and safety/security/privacy. There is likely no perfect solution; patient identity discovery is a very hard problem. If you can centralize, or have a unified identity, then that solution is going to be far more efficient.
A more detailed and specific examination of the way that the NwHIN Exchange uses this profile is at NwHIN-Exchange use of XCPD for Patient Discovery. This shows some of the ways that an operational environment can constrain XCPD, and some of the ways that one can get in trouble.
Obligatory disclaimer by Karen regarding IBM: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Tuesday, March 20, 2012
How to apply Risk Assessment to get your Security and Privacy requirements
Oftentimes I am asked for a set of security and privacy requirements. My response is that there is no fixed set of security and privacy requirements; a Risk Assessment approach is the proper way. This leads to a discussion of how to do a Risk Assessment. This article is simply a short overview of how to apply Risk Assessment. There are books written on the topic, there are consultants who do only risk assessments, and there are many tools. It is a simple concept, but it is not easy to apply. Getting started is the most important part; things really do fall out from there. So you need to just get started.
The problem with a set of security or privacy requirements is that they are prescriptive, and that they are written to cover a whole range of products. Thus they tend to not be sufficient to fully address the security and privacy problem, while at the same time being too much.
Security is best done through use of a Risk Assessment that is focused on security and privacy risks. That is, one looks at the problems, not at the solutions. The solutions are applied carefully to address the problems in priority order; that priority comes from the assessment of the risk. In this way we apply just enough technology to address real-world needs. We stay away from applying technology purely because it is cool, and we stay away from worrying about risks that are not a true priority.
Note this same system works for small organizations, large organizations, small systems, large systems. It works quite well with mobile health devices.
Risk Assessment as a flow
I have cast this same material formally for doing security and privacy risk assessments in HL7 and IHE;
- How to Write Secure Interoperability Standards
- IHE, Cookbook for Security Considerations in IHE Profiles
- HL7, Cookbook for Security Considerations in Standards
This has also been cast in IEC-80001 for doing a risk assessment when attaching a Medical Device to a network. It recognizes that the Medical Device should have clear controls applied to clear risks, and indicate risks that might flow into the operational risk assessment. Thus the operational environment can pick up from those controls and risks identified by the Medical Device product developer.
- IEC 80001 - Risk Assessment to be used when putting a Medical Device onto a Network
- More Webinars on Basics of IEC 80001
- IEC 80001 - Security Technical Report presentation
How to do it
The best document that I have found is NIST SP 800-30, Risk Management Guide for Information Technology Systems. It is an old document, but the concepts have not changed much. There is a draft revision in progress, so you might also find the "DRAFT Guide for Conducting Risk Assessments" (SP800-30-Rev1-ipd.pdf) useful. There are many documents on risk assessment, and on risk assessment applied to security; see the References below for a long list. I like the NIST publication because it is clear and clean.
You will need a spreadsheet that you can use to identify the risks and do the analysis. If you have a spreadsheet that is used for some other risk domain, such as "Patient Safety", it is likely re-usable in this domain. There are suggestions in NIST 800-30, and there are suggestions in the References below. Use what you are used to using.
Given the sections in NIST 800-30:
- Table 2-1 helps the group understand that risk assessments are done at many levels, not just once. This is indeed the concept of doing risk assessments for standards, for medical devices, and again for network integration.
- Figure 3-1 shows the process of doing the security risk assessment. YES, it is a long process, but it is a simple process once you have done it once. It is really just a formalization of something everyone does naturally when they assess whether to cross the street.
- Table 3-1 recognizes that there are many different motivations. Yes, we need to worry about all of these, but don't get yourself wrapped around any one too tightly.
- Table 3-2 shows that security risks are made up of pairs of vulnerabilities and the threats that exploit them. In practice the table is rarely this simple, but you get the picture.
- Don't spend too much time reading section 3.3; brainstorming is the best approach. If the brainstorming doesn't satisfy people, then bring in some of section 3.3. Most of the time a reasonably experienced group will come up with a good set of risks.
- Section 3.3.1 tries to help you gather sources of documented vulnerabilities that likely already exist. Since you are starting fresh, this may not be that useful.
- Section 3.3.2 -- determine the vulnerabilities of the system
- Table 3-3 is where we recognize that not everything is caused by technology
- THIS IS NOT DOCUMENTED IN NIST 800-30 because it is healthcare specific: if a risk looks, smells, tastes, or feels like a patient safety risk, do NOT continue to assess it in the security risk assessment; MOVE it to a patient safety risk assessment. ALSO, patient safety mitigations must be examined to verify that they have not introduced security risks that can't be mitigated.
- Section 4 gives you some options for handling these prioritized risks. This section is worth a quick look; it should seem familiar.
- There needs to be a management-defined threshold below which you don't worry about the risks (see the sketch after this list). This is usually where everyone fails to set thresholds up-front, causing unnecessary analysis and resource utilization. Often it is simply a statement of lowering the risk to as low as 'reasonably possible'.
- Section 4.4.1 is useful to look at. It reminds us that there are supportive controls, preventative controls, and detection controls.
- Don't forget to use the cost-benefit analysis. This is where we get concerned that the fix needs to be reasonable given the pain.
- Residual Risk -- there is some risk that can't be controlled; risk is never brought totally to zero. Usually these risks flow down; ultimately, at some operational level, the residual risks get covered by insurance.
- Appendix A might help the brainstorming session…
- Appendix C -- an option for a spreadsheet.
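As a toy illustration of the flow above, including the healthcare-specific patient-safety rule and the up-front threshold, here is the risk "spreadsheet" reduced to a few lines of Python. The risks, scores, and threshold are all invented for the example:

```python
# Each row pairs a vulnerability with a threat (per Table 3-2), scored by
# likelihood and impact. Everything below is invented for illustration.
RISKS = [
    # (vulnerability / threat pair, likelihood 1-3, impact 1-3, patient safety?)
    ("unencrypted laptop disk / theft",         3, 3, False),
    ("no audit log review / insider snooping",  2, 3, False),
    ("unvalidated query input / device crash",  1, 3, True),
    ("verbose error messages / reconnaissance", 1, 1, False),
]

ACCEPT_BELOW = 4  # management-defined threshold, set BEFORE the analysis

for name, likelihood, impact, patient_safety in RISKS:
    if patient_safety:
        # Healthcare-specific rule: move it to the patient safety assessment.
        print(f"{name}: MOVE to patient safety risk assessment")
        continue
    score = likelihood * impact
    action = "mitigate" if score >= ACCEPT_BELOW else "accept as residual risk"
    print(f"{name}: score {score} -> {action}")
```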
A good source of things to worry about can be found in documents called Protection Profiles. A protection profile is a way of documenting the requirements for a product of a specific category. The following is a good one for our medical information systems to follow. Protection Profile for Single-level Operating Systems in Environments Requiring Medium Robustness
References
- JOKE: The Security Risk Assessment formalization should NOT use a tool like http://www.crypto.com/bingo/pr
- IEC 60812 Ed. 1.0: Analysis Techniques for System Reliability - Procedure for Failure Mode and Effects Analysis (FMEA)
- NIST SP 800-30: Risk Management Guide for Information Technology Systems http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf
- NIST DRAFT Guide for Conducting Risk Assessments SP800-30-Rev1-ipd.pdf.
- ISO 14971:2000: Application of risk management to medical devices
- ISO 17799 (2000) Information Technology - Code of practice for information security management
- MIL-STD-1629A, Procedures for Performing a Failure Mode Effects and Criticality Analysis, November 24, 1980
- Australian Standard AS4360:2004 Risk management
- Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) Framework
- Wikipedia: Risk Management http://en.wikipedia.org/wiki/Risk_management
- OODA: Observe Orient Decide Act
- IEC 61025 Fault Tree Analysis
- IEC 61882 HAZOP Application Guide
- Carnegie Mellon Software Engineering Institute, Software Risk Evaluation Method, Version 2.0 http://www.sei.cmu.edu/pub/documents/99.reports/pdf/99tr029-body.pdf
- SAE J-1739, Potential Failure Mode and Effects Analysis in Design and Potential Failure Mode and Effects Analysis in Manufacturing and assembly Processes Reference Manual, Aug 1, 2002
- IHE, Cookbook for Security Considerations in IHE Profiles
- HL7, Cookbook for Security Considerations in Standards
- 10 Steps to Creating Your Own IT Security Audit January 2007, IT Security Magazine
- ISO TS 25238 Health informatics: Classification of safety risks from health software
- ISO TS 27809:2007 Health informatics: Measures for ensuring patient safety of health software
- ISO/NWIP/DTS 29321 Health informatics: Application of Risk Management to the Manufacture of Health Software
- ISO/NWIP/DTR 29322 Health informatics: Guidance on Risk Evaluation and Management in the Deployment and Use of Health Software
- ISO/IEC 80001 Application of risk management to IT-networks incorporating medical devices
Monday, March 19, 2012
I do NOT need more meetings
I hate to rant too much… but I really don't need more meetings. Here is a view of this week, a typical week. Yes, my meetings start at 7am, meaning I am regularly up and preparing for my day at 6am. Yes, I have days with 8 hours of meetings. Yes, people expect me to make it to 3 meetings at the same time.
Sunday, March 18, 2012
Policy Enforcing XDS Registry
In the S&I Framework - Data Segmentation for Privacy we are looking at the standards that could satisfy one of the use-cases. The use-case is one where a Health Information Organization (HIO) is told about the patient's consents and is responsible for enforcing them. This arrangement has been implemented in a few HIOs that I have been a part of, and I know that there are others. The advantage of this arrangement is that the edge systems (EHR, PHR, Departmental, Imaging) don't need to concern themselves with consent; it is magically done in the HIO. This nicely centralizes all the consent logic. The disadvantage is that the passing of the security context from the edge systems to the central core needs to be far more complex, to handle the vast number of exceptions to the rules (e.g. Break-Glass). It is a nice clean way to get an HIO going.
In this case we are actually defining a new system that will leverage multiple XDS Actors/Transactions. The IHE use of Actor is context independent. So, although an XDS Registry actor from IHE may seem like the Actor that implements the core of the HIO; it just appears to look this way. The HIO is actually made up of many things. In the terms of HITSP, we are creating compositions out of Actors; but that might be ancient history.
XDS Background
XDS is equally good at managing clinical documents as it is consent documents. So publication of a consent document is the same as publication of a clinical document (see 1 in the figure at the right). The abstraction of XDS allows for a central document Repository, but can also handle repositories that are distributed, likely hosted in the publishing organization. The central Registry just holds document entry metadata, and natively supports data segmentation based on any of the metadata entries; most often this is simply confidentialityCode, but it can also be by document type (clinical vs consent) or authoring organization (Kaiser, VA, Betty Ford).
The XDS Query is equally good at querying for clinical documents as for consent documents (see 3 in the figure at the right); they are all simply documents to this abstract transaction. The context (clinical document vs consent document) is simply a difference in the query parameters. And retrieving the consent document itself is done just like any clinical document (see 4 in the figure at the right).
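To underline how little differs between the two queries, here is a sketch in the XDS stored-query style. The parameter names follow the stored-query convention, but the patient identifier and class codes are placeholders of mine; the actual vocabulary is an affinity domain decision.

```python
# Both lookups are the same Registry Stored Query (FindDocuments);
# only the parameter values differ. The codes below are placeholders.
def find_documents(patient_id: str, class_code: str) -> dict:
    return {
        "queryId": "FindDocuments",
        "$XDSDocumentEntryPatientId": patient_id,
        "$XDSDocumentEntryClassCode": class_code,
        "$XDSDocumentEntryStatus":
            "urn:oasis:names:tc:ebxml-regrep:StatusType:Approved",
    }

pid = "pid-123^^^&1.2.3&ISO"  # placeholder identifier
clinical_query = find_documents(pid, "clinical-document-class")  # placeholder code
consent_query = find_documents(pid, "consent-document-class")    # placeholder code
print(clinical_query == consent_query)  # differs only in the classCode value
```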
Let's start with the most outside view:
- A System (EHR, PHR, etc) queries the HIO for all longitudinal documents on Patient X.
- The HIO looks in the registry and compiles a response with all the documents matching the query
- The HIO returns the response to the query back to the System that made the request.
This is Classic XDS, and from the view of the "System"; this is exactly what it will do.
Policy-Enforcing-Registry:
The magic is that the HIO is actually a complex system: the system that actually receives this query is the new uber-Registry. Let's call it a "Policy-Enforcing-Registry".
- This Policy-Enforcing-Registry will intercept the query and make some initial Access Control decisions, very much like XACML is modeled with a PEP/PDP.
- Among the Access Control decisions that this Policy-Enforcing-Registry makes, it needs to determine what the current state of consent is. It can do this by itself becoming a Document Consumer actor and formulating a very specific query for ‘consent’ documents on Patient X.
- Depending on the response, the information in metadata, and the information cached, it might also need to act as a Document Consumer actor and pull the consent documents found from the Repository they exist in, as identified by the metadata.
- Ultimately it will determine what the consent rules are, along with all the other various rules in the HIO (e.g. Role-Based-Access-Control, Break-Glass, Organizational-restrictions, etc.).
- At this point the Policy-Enforcing-Registry simply knows whether it should reject the original Query, or allow it to partially happen.
- If the patient has not given positive consent, then an audit log entry should be recorded and the original query returned with a failure of some kind (some HIOs like to say that consent is missing; others like to act as if the patient doesn't exist).
- If the query should be allowed, the Policy-Enforcing-Registry will allow the original query to be executed; but intercept the response.
- It now needs to inspect the response to determine whether there are specific conflicts between the access control rules and the content.
- It might need to remove some or all of the returned metadata entries.
- It can then return the legitimate results to the original clinical system; possibly attaching constraints/obligations.
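The runnable toy below compresses that flow into one function. Everything in it, the in-memory registry, the consent check, the clearance test, is a hypothetical stand-in for real XDS transactions and a real access-control engine:

```python
# Toy Policy-Enforcing-Registry. The in-memory dict stands in for the XDS
# Registry; the clearance test stands in for a full access-control engine.
REGISTRY = {
    "pid-1": [
        {"type": "consent",  "confidentialityCode": "N"},
        {"type": "clinical", "confidentialityCode": "N"},
        {"type": "clinical", "confidentialityCode": "R"},  # restricted entry
    ],
}

def handle_stored_query(patient_id, requester_clearance):
    entries = REGISTRY.get(patient_id, [])
    # Steps 2/3: the uber-Registry first queries itself for consent documents.
    consents = [e for e in entries if e["type"] == "consent"]
    if not consents:
        print("AUDIT: no consent on file; rejecting query")
        return []  # or behave as though the patient does not exist
    # Step 4: run the original clinical query, then filter per policy.
    clinical = [e for e in entries if e["type"] == "clinical"]
    return [e for e in clinical
            if e["confidentialityCode"] == "N" or requester_clearance == "R"]

print(handle_stored_query("pid-1", requester_clearance="N"))  # normal only
print(handle_stored_query("pid-1", requester_clearance="R"))  # both entries
```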
Key notes for understanding the Diagram
- Yellow circles show sequence of transactions.
- Transactions 1 and 5 are broken into two parts simply to show time passing between when the EHR sends its request and when it gets back the response to that query. This is all just one XDS transaction, “Registry Stored Query”. As far as the EHR is concerned it has just done a simple XDS transaction; it is totally unaware of all the processing happening inside the Policy-Enforcing-Registry.
- Transactions 2 and 4 are also just the XDS “Registry Stored Query” transaction.
- Transactions 2, 3, and 4 are shown as double-headed arrows because there is no need to show time passing between the request and the response, so for this diagram each is combined for simplicity.
- The Repository could exist inside or outside the Policy-Enforcing-Registry box.
Yes, the XDS profile was designed with this Policy-Enforcing-Registry in mind; but it is a systems design that puts the parts together. IHE only defines profiles to the point of assuring Interoperability, never including all possible systems that could be designed using those profiles.
Updated Noon March 19th
During the S&I Framework discussion of this approach, the question came up whether there is strict adherence to the sequence of events and the definition of the transactions. Underlying this question could be many things, but I know that one of the approaches is to leverage XACML concepts. I offer the following diagram, which integrates a XACML policy repository into the picture without changing the XDS transactions. Essentially what this new diagram shows is that some task is responsible for adjudicating the policies as they are created or updated; thus, when the EHR asks for clinical data, the policies are much quicker and simpler to execute.
I left the yellow sequence numbers the same where the transaction didn't change; clearly they no longer represent the order of events. There are now two distinct flows:
Resolve Policy
- Based on a new policy being registered, or some other 'event', the 'Resolve Policy' service would use #2 "Query (consents)" and #3 "Request Consent Documents"
- Pull the existing policies from the XACML Policy Repository using #6
- Resolve these new policies with the existing policies
- Push the updates back into the XACML Policy Repository using #6
Real-time Query for Clinical Content
- Query request for clinical content #1
- The Access Control engine looks up the existing policies from the XACML Policy Repository using #7
- If access should be granted, then the query request for clinical content proceeds as #4
- The results are inspected relative to the existing policies
- The appropriate results are returned
There is no lesser or greater functionality; this model is just more likely to execute quickly for the Real-Time Query. To me this model is exactly consistent with the one above, just optimized for performance, something I would expect good service developers to do anyway.
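A toy sketch of why the optimization helps, with all names hypothetical: the expensive consent adjudication runs once, in the Resolve Policy flow, leaving the real-time path as little more than a repository lookup.

```python
# All names hypothetical. The dict stands in for the XACML Policy Repository.
POLICY_REPOSITORY = {}  # patient id -> pre-resolved, ready-to-evaluate policy

def resolve_policy_event(patient_id, consent_documents):
    """'Resolve Policy' flow: adjudicate consents once, at update time (#2/#3/#6)."""
    POLICY_REPOSITORY[patient_id] = {
        "permit": any(doc == "opt-in" for doc in consent_documents)}

def real_time_query(patient_id):
    """'Real-time Query' flow: a simple lookup (#7); no adjudication needed."""
    policy = POLICY_REPOSITORY.get(patient_id, {"permit": False})
    return "execute #4 and filter the results" if policy["permit"] else "deny"

resolve_policy_event("pid-1", ["opt-in"])
print(real_time_query("pid-1"))  # execute #4 and filter the results
print(real_time_query("pid-2"))  # deny: no resolved policy on file
```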