Sunday, January 31, 2010

Patrick Pyette: The Case for Privacy Accounting

Here is another guest blogger: Patrick Pyette. Pat is a friend and very active security/privacy standards geek.

I started this exercise as a response to Glen Marshall’s submission to the HIMSS Working Group around disclosure accounting, but it has taken on a life of its own.   It’s now my attempt to explain why privacy accounting (a term that I’ll define a bit later) is critically important, even in relation to quality care and availability. 

Before I begin, some disclaimers are probably in order:  I’m a Canadian, and my views (and spelling) are coloured by that background.   I do not represent any particular organisation, jurisdiction, or agenda, other than my own.   And while my involvement in healthcare privacy and security issues has exposed me to legal issues, especially when it pertains to consent, I am not a lawyer.

The first thing to note is that the disclosure accounting requirements are based upon the Individual Access and Challenging Compliance principles of fair information practices. 
  • The Individual Access principle states that a patient has the right to, upon request, be informed of the existence, use, and disclosure of their personal information.  
  • The Challenging Compliance principle states that a person has the right to challenge an organization's compliance with the other principles (depending on the country, there are between 8 and 10 principles in total, but they all cover substantively the same ground).  
The U.S., Canada, all of the EU countries, Japan, Australia, and Argentina (to name more than a few) have adopted substantively similar principles (see the references below for links to various versions of these). The message here is that this is a global issue and the solution needs to be viewed in that light.  

Disclosure accounting is only part of the requirement, as far as I’m concerned.   Accounting of collection, use, and disclosure of personal (health) information (I’ll refer to this as “privacy accounting” from here on in) is something that we need our health systems to do efficiently and effectively.   I do recognize that mandatory auditing of collection and use of personal health information is generally not a requirement in the U.S. but I would question the ability to establish accountability without it.

The reason is simple:  Trust.   Without trust in the electronic health systems that we are building, people (both patients and providers) will not want to disclose accurate information to those systems.  The result will be a stranded investment of immense proportions without any of the benefits in terms of outcomes and reduced cost that are purported to be achievable by the interconnection and interoperation of these systems.  

I was amazed to learn last week at the HL7 Working Group Meeting that several of the U.S. states have enacted legislation that gives people the right to consent (or withdraw that consent) to the communications channel over which personal health information collection, use, and disclosure can take place!  This appears to me to be a direct result of the lack of trust being placed in the systems that are being designed, implemented, and operated to enable the exchange of health information today. 

If we can’t demonstrate the trustworthiness of these systems and the accountability to which we hold the users of those systems (via things such as privacy accounting), then more states will adopt similar legislation and will make it increasingly difficult to realize the benefits that are not just desired, but necessary if we hope to provide healthcare supported by better information at a lower cost. 

Some may argue that the legislation is intended to shield providers from liability for the inevitable breach that will occur as a result of communicating personal health information over a Health Information Exchange (or interconnected EHR). I would assert that breaches are an inevitable part of information exchange.   If the information were transmitted by dog-sled, there would still be a breach at some point.   What we want to do is minimize the risk of that happening, and the subsequent damage done. The problem is that in order for consent to be valid, it must be informed.  

I’m a pretty technology-savvy guy, with a fair bit of understanding of privacy and security issues, and I can’t properly assess the risk of giving consent for a particular HIE without much more information than would be reasonable to provide (for security reasons).   If my consent is invalid (as it is not informed), then certainly my Aunt Mary’s consent would be also, as would that of everyone else I know.   As a result, we’re effectively left with a requirement to provide consent in those states that have that legislation and no real way of doing so (note that this is my opinion only, and remember – I’m not a lawyer.  I have no indication that this has been tested in the courts).   An Electronic Health Record?  An HIE?   Those states have effectively made those terms irrelevant, as they could never be used as intended.  And if they were, the liability still rests with those who have custodial responsibility for the information - as it always has.

The only way forward that I can see is to build enough business and technical safeguards into these systems (I include people, processes, and technology in this concept of a “system”).   Policies need to be established that go well beyond the security and privacy floors that are legislated, so that enough trust can be established with all stakeholders that these systems can start to provide the benefits that we believe are possible.   Privacy Accounting is one of those safeguards that I believe are required.
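To make "privacy accounting" concrete, here is a minimal sketch of what such an accounting record might look like. All field names and values are my own invented illustration, not any standard's schema: the point is simply that every collection, use, and disclosure event is captured with enough context to answer a patient's Individual Access request later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivacyAccountingEntry:
    # Hypothetical privacy accounting entry (illustrative only).
    event: str          # "collection" | "use" | "disclosure"
    patient_id: str     # identifier of the data subject
    actor: str          # who collected, used, or disclosed the data
    purpose: str        # stated purpose, e.g. "treatment"
    recipient: str = "" # populated for disclosures only
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def account_for(log, patient_id):
    """Return the accounting a patient would receive upon request."""
    return [e for e in log if e.patient_id == patient_id]

log = [
    PrivacyAccountingEntry("collection", "p1", "clinic-A", "treatment"),
    PrivacyAccountingEntry("use", "p1", "clinic-A", "treatment"),
    PrivacyAccountingEntry("disclosure", "p1", "clinic-A", "payment",
                           recipient="insurer-B"),
    PrivacyAccountingEntry("use", "p2", "clinic-A", "treatment"),
]
print(len(account_for(log, "p1")))  # → 3
```

Note that the accounting covers collection and use, not just disclosure; that is the distinction Pat draws between "privacy accounting" and the narrower disclosure accounting requirement.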

Will it cost more if we do this?   Absolutely. 

Will it cost even more if we don’t?  Absolutely.


Canadian Standards Association: Privacy Code:

U.S. FTC Fair Information Practice Principles:

U.S. Office of the National Coordinator (ONC), Nationwide Privacy and Security Framework For Electronic Exchange of Individually Identifiable Health Information

Organisation for Economic Co-operation and Development: OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data

Thursday, January 28, 2010

Forbes: The Next Health Care Debate: Digital Privacy

This is a well-written article about the 'latest' healthcare privacy survey. This is yet another survey showing that the general public 'answers surveys as if they don't trust anyone'. I put that in quotes not because it is a quote, but because I wonder just how the questions were asked. There is a big difference between answering a survey that is clearly intended to get you angry about privacy and actually taking action. I don't think we will know how this stacks up for years. Meaning we need to wait to see what the general public actually does.  We know that the general public is very happy using web sites that clearly track their behavior.
As President Obama has learned over the last year, Americans tend to get angry when you try to fix the country’s dysfunctional health care system. But even as the national debate over universal coverage drags on, there's another sticky issue ahead for health reform: digital privacy.

In a study released Monday by the privacy-focused Ponemon Institute, Americans registered a deep distrust of anyone in either the federal government or private industry who might store digital health records like those that the Obama administration has encouraged hospitals to create. Of the 868 Americans surveyed about their views on digitizing and storing health records, only 27% said they would trust a federal agency to store or access the data--the same percentage as those who would trust a technology firm like Google, Microsoft or General Electric.
That distrust, says the Institute's director Larry Ponemon, could represent a roadblock to the Obama administration's push for electronic health records, backed up by $19 billion in grants included in the economic stimulus package passed last February. "The takeaway message is that people still care about privacy," says Ponemon. "There's a lot of angst around centralizing this information, no matter whether it's managed by private enterprise or government."
A key finding  (bold is my emphasis)

In fact, 71% of respondents to Ponemon's survey were amenable to letting hospitals, clinics or physicians store their health records. And 99% said a patient's own doctor should be able to access his or her digital health records stored in a national system. But only 38% said that a federal government agency should be able to access those records, and only 11% thought that private businesses should have access.
The mechanisms that HITSP has been identifying, NHIN implementing, HIT-Standards selecting, and Meaningful Use driving toward are not necessarily going to 'give' anyone the rights to access. Those rights need to be authorized. This authorization is part of overall governance, which is a part of Federal policy making; but it is also a factor of the Privacy Policies that the patient agrees to. Patients need to be very careful to read these policies and push back when they say something unreasonable. Most Privacy Policies have allowed the patient to completely opt out. That said, I am sure that the exclusions found in HIPAA around legal mechanisms will very likely continue, and with Homeland-Security pressures, privacy 'rights' will continue to decline.

The whole article

Tuesday, January 26, 2010

OASIS: Making Privacy Operational

This Webinar is an exploratory discussion to see if there is interest in creating a Privacy Management Technical Committee in OASIS. It might be very helpful to our cause in Healthcare to bring together standards from beyond Healthcare. Moving the Privacy discussion out of proprietary implementations into standards is a really good thing. We can do this alone in Healthcare standards bodies like HL7, but we will only cover the space controlled by HL7.  I plan to attend.

OASIS presents a complimentary webinar to discuss the anticipated formation of a new privacy management technical committee.  The TC would be based on the 'Privacy Management Reference Model' produced by the International Security, Trust, and Privacy Alliance (ISTPA), which will be described in the webinar.

Data privacy is the assured, proper, and consistent collection, storage, processing, transmission, use, sharing, trans-border transfer, retention and disposition of Personal Information (PI) throughout its life cycle, consistent with data protection principles, privacy and security policy requirements, and the preferences of the individual, where applicable.
Today, increased cross-border and cross-policy domain data flows, networked information processing, federated systems, application outsourcing, social networks, ubiquitous devices and cloud computing bring ever more significant challenges, risk, and management complexity to privacy management.  

Privacy requirements are typically expressed as broad policy objectives (fair information practices and principles) that are far removed from the rigorous expressions of requirements needed by system analysts, architects and developers.  The purpose of the proposed Privacy Management Reference Model TC will be to define a structured format for describing privacy management Services to support and implement any privacy requirements, but at a functional level.

The Reference Model will serve as a template for developing operational solutions to privacy issues, as an analytical tool for assessing the completeness of proposed solutions, and as the basis for establishing categories and groupings of privacy management controls.

Who should attend:
Privacy policy makers, privacy and security consultants, auditors, IT systems architects and designers of systems that collect, use, share, transport across borders, exchange, secure, retain or destroy Personal Information.

Date: Tuesday, February 23, 2010
Time: 11:00 AM - 12:00 PM EST
For more information and to register (Updated URL:

Monday, January 25, 2010

CCHIT - Security Certification Gap Analysis against the IFR

CCHIT has done a gap assessment between the IFR and their current security certification criteria, which they updated for ARRA. Given that they knew they were the only place in town for Certifications, yet also knew that they might not be in the future, they worked hard to align their Certification early. They also indicated that any changes to the Certification testing caused by changes in the regulation would be smoothed over in their testing, with potentially simple gap testing rather than whole re-testing. This has seemed like a good approach.

So, with the security criteria, their gap assessment indicates that there are few issues. The majority of their security criteria meet or exceed the IFR text. They had added Kerberos, and it turns out that was not needed.

They are just as confused as the rest of us on what cross-enterprise authentication really means (See Federated ID is not a universal ID). They are also confused about the new Accounting of Disclosures (See ATNA and Accounting of Disclosures). In both of these cases they put conditionals on the need for certification that are not clear to me. I think that these will be made clearer in final form and will essentially require nothing, as I have already discussed in the other articles.

I have pointed out that the current CCHIT criteria are a more specific and reasonable set of criteria; this should not be too surprising since I was co-chair of the Security workgroup during much of their development. But I also point out that the IFR can NOT be as specific as CCHIT can be, simply because the IFR will be regulatory text that is hard to change and thus hard to evolve over time. The IFR must therefore set goals in broad terms and not be prescriptive. Here I recommend that the IFR lean on Security Risk Assessment to stay out of the problem. See Meaningful Use Security Plan.

Thursday, January 21, 2010

Accounting of Disclosure Challenges and Top Information Security Concerns

The following is contributed by Glen Marshall. He sent this in an email earlier today to the HIMSS workgroup. I felt it was so well done that I asked him if I could post it on my blog.

HIMSS Information Security Workgroup members were recently asked to:
  • Provide information describing administrative, operational and technical challenges, burdens or barriers for the [ARRA] expanded accounting of disclosure requirement.
  • Forward their top three information security concerns for 2010.
Here are my responses:
1.    The biggest technical challenge is that "accounting of disclosures" is a larger matter than security auditing, but many people do not understand this.
  • Key Issues:
    • The  collection of security audit data for healthcare is standardized, e.g., by HITSP/T15 and the underlying IHE ATNA specification.
    • As a practical matter, many security audit data that could be reportable disclosures are not captured according to the HITSP standard, e.g.:
      • legacy system issues
      • vendors' non-conformance
      • reliance on cross-industry security audit data in the operating platform.  
    • In addition, under the HIPAA privacy rule provisions still in force, a complete accounting of disclosures must include non-electronic disclosures that are not within the TPO exclusion.  
    • The baseline of data defined in HIPAA and ARRA is less than that defined by HITSP/T15, and is less data than would be reasonably required for disclosure reporting.  
    • The means to normalize and select audit data and identify reportable disclosures is not standardized.  This includes the need to define standardized vocabularies for data query and selection that are provided for in the underlying security audit record defined by RFC 3881.
    • The means to identify and collate events that occur among multiple enterprises and computer systems is not standardized, at least not beyond the audit repositories defined in HITSP/T15.
    • The form of reporting for disclosures is not standardized.
    • The means to identify recipients of accounting of disclosures and send the reports to them is not standardized.
    • The means to request an accounting of disclosure, who can request it, how, where, to whom, etc. is not standardized.
  • This all suggests a non-trivial application system design issue.  It also suggests a market need that could be met by vendors, and that will not be free and probably will not be met by mature products in time for regulatory compliance.  
  • My guess is that we'd have suitably mature products available no earlier than mid-2011 if work starts now.  There will be premature announcements before then, of course.
2.     My top three concerns:
  • Lack of specificity in current federal rulemaking with respect to healthcare IT security and infrastructure standards, with consequent non-interoperability in implementations, will produce near- and mid-term havoc in healthcare provider networks.  The resulting costs will eat up funds that would be better spent on care quality and availability.
  • ONC does not acknowledge that the CDA-vs-CCR debate was settled amicably over two years ago, and that the CCR standard explicitly excludes itself from the ONC-defined uses. (The CCR standard clearly states in section 1.3.1 that "The CCR XML is not a persistent document, and it is not a messaging standard," and defines persistent document in section 3.1.41 as "a document that remains as a document within a data structure or file system once it has been used for its original intended use.")  This will cause added delays and costs to healthcare IT systems, eating up funds that would be better spent on care quality and availability.
  • In its regulatory language, ONC does not adequately distinguish requirements for system-to-system interactions versus end-user interactions.  The "debate" about SOAP versus REST flourishes due to this, and the resulting lack of focused, conclusive leadership will add delays and costs to healthcare IT systems, eating up funds that would be better spent on care quality and availability.  
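Glen's point that deriving reportable disclosures from raw security audit data is non-trivial can be sketched in a few lines. The field names and values below are my own invented illustration (loosely inspired by the fields an RFC 3881-style audit record carries, not its actual schema): the post-processing must normalize purpose-of-use, drop events that stay inside the enterprise, and apply exclusions such as HIPAA's treatment/payment/operations (TPO) carve-out.

```python
# HIPAA TPO purposes are excluded from disclosure accounting
# (a simplification; the real rule has more nuance).
TPO = {"TREATMENT", "PAYMENT", "OPERATIONS"}

# Invented raw audit events, as a security audit repository might hold them.
audit_events = [
    {"event": "export", "patient": "p1", "destination": "hospital-B",
     "purpose": "TREATMENT"},
    {"event": "export", "patient": "p1", "destination": "researcher-C",
     "purpose": "RESEARCH"},
    {"event": "query",  "patient": "p2", "destination": "internal",
     "purpose": "OPERATIONS"},
]

def reportable_disclosures(events):
    """Keep only cross-boundary events whose purpose falls outside TPO."""
    return [e for e in events
            if e["destination"] != "internal" and e["purpose"] not in TPO]

for e in reportable_disclosures(audit_events):
    print(e["patient"], e["destination"], e["purpose"])
# → p1 researcher-C RESEARCH
```

Even this toy version shows why the lack of standardized vocabularies for purpose-of-use and recipients matters: without them, every system would encode these values differently and the filter above could not be written once.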

Monday, January 18, 2010

Top 10 posts of 2009

I noticed, only today, that Keith had posted his top 10 blog postings of 2009. Given that I have the same Google Analytics following my blog, I figured I would look at what it says.

The article that stands far ahead (seven times more hits than any other article) is my post Meaningful Use clearly does not mean Secure Use. But this article was posted in 2010, so it doesn't qualify. The article Meaningful Use - Security Plan is also in the top 10, but doesn't qualify either, as it was posted in 2010.

Here are my top hitters, starting with the most hit:
1. Observation on REST vs SOAP Which is really a rant about the non-transparency of rule-making
2. Federated ID is not a universal ID
3. Consumer Preferences and the Consumer
4. HITSP August 2009 face-to-face -- Security, Privacy and Infrastructure
5. Double Standard?
6. What has HITSP done to protect confidentiality with a suite of implementable security standards
7. Current Security and Privacy developments
8. Kerberos required in 2011 then forbidden in 2013
9. Web-Services RESTful vs SOAP
10. ATNA and Accounting of Disclosures

I am not going to give the statistics on these pages as they are such low numbers... I do thank everyone that does read, comment, or just send me supportive emails.

Tuesday, January 12, 2010

ONC to test re-identification of protected data

I am very interested in this effort by ONC, but not expecting much from it given the scope. This is a needed first step that should continue to analyze the topic.

There have been many efforts to define how to de-identify data, including the famed 18 identifiers in HIPAA for healthcare data. HITSP has identified a set of anonymization constructs (C25, C87, and C88) and a construct for creating/managing pseudonyms (T24). These were all developed using the model defined in ISO Health Informatics -- Pseudonymization, Technical Specification ISO/TS 25237. This is a globally defined standard that brings together much of the best thinking on the topic and many of the best practices. I have tried in all work that I have touched to be very clear that de-identification can only lower the risk; it can not remove the risk. The best use of de-identification is to have a very specific intended use and to remove all attributes that are not necessary for that intended use. I have outlined much of this problem in a prior blog post:
De-Identification is highly contextual
There have been many proofs that this kind of data is re-identifiable in some capacity through cross-correlation with other publicly available databases. Most of these have identified a well-known individual and found their data in the data set. They have not attempted to re-identify a complete data-set. This isn't that bad of a simplification, as the risks of re-identification usually stem from an attacker wanting to know something about a specific individual. Latanya Sweeney, Ph.D., is a well-known luminary on the topic, and the good news is that she has been brought into HIT-Policy.

What has been missing is a quantitative analysis that would identify some scale for just how easy or hard this re-identification is, or how complete the re-identification is. We know just how long it takes to 'crack' encryption algorithms like DES and AES. Having a quantifiable rating for de-identification algorithms would be very helpful.
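One existing metric of this kind is k-anonymity, the measure popularized by Latanya Sweeney's work: a released dataset is k-anonymous if every combination of quasi-identifiers (the attributes usable for cross-correlation, such as ZIP code and age band) is shared by at least k records. A minimal sketch, with invented data:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size over all quasi-identifier combinations.
    Higher k means each record hides in a larger crowd."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(counts.values())

# Invented, already-generalized records (ZIP truncated, age banded).
rows = [
    {"zip": "021*", "age": "30-39", "dx": "flu"},
    {"zip": "021*", "age": "30-39", "dx": "asthma"},
    {"zip": "021*", "age": "40-49", "dx": "flu"},
    {"zip": "021*", "age": "40-49", "dx": "diabetes"},
]
print(k_anonymity(rows, ["zip", "age"]))  # → 2
```

k-anonymity is not a complete answer (it says nothing about the sensitive values within a group), but it is exactly the sort of quantifiable rating the paragraph above asks for.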
The Office of the National Coordinator for Health IT wants to test what it would take to re-identify personal health information that has been scrubbed of the digital identifiers  that link it to an individual person. More

Monday, January 11, 2010

Meaningful Use - Security Plan

Now that I have vented and thought more about the Meaningful Use IFR, let me address what I think should happen. Given that there is so little time between the final text of the regulation and the certification deadline in 2011:

First: Everyone should comment on the IFR. Whatever your opinion is, now is the time to say it. Don't just send in nasty notes about how bad it is; be specific about your concern and what you want them to do about it. Your voice will be heard far better if it is specific about the actual textual changes.

Second: To ONC: Remove all of the security requirements. Yes, this would be BETTER than what is there today. I would rather have specific requirements that assure that two different parties obtaining a certified EHR have at least one way that they know they can securely connect (HITSP T17). It is clear that ONC doesn't think they can ask for this due to some market reason, so I would rather have security in the hands of HIPAA than the confusion that arises from poorly worded security requirements in regulatory text. HIPAA is a risk based approach that scales well with the actual risks and thus can better adapt to different settings and changes over time.

Third: To the EHR suppliers: Follow CCHIT security/privacy requirements. I really hate to say this as I am not that big of a fan of CCHIT, but they have a good set of functional requirements for security and privacy (yes, I was co-chair when the majority of them were written, so yes I believe in every security requirement. My aversion to CCHIT has more to do with their non-openness and some of their operational issues during certification). I know that many EHR products have achieved CCHIT certification (including GE's EHR products). I have no idea who has not achieved CCHIT certification. If we understood who these were, we might understand why ONC seems to want less security than CCHIT has asked for. I am not sure I would want to recommend to any Provider that they use a product that can't show that they have met the CCHIT security requirements. These seem like a reasonable baseline of 'functional security capabilities'.

Fourth: To the EHR suppliers: Get going on HITSP security/privacy specifications. Some of these requirements are hard to meet; start with the easy ones. I have written about the maturity of the HITSP specifications, and also a scalable approach that recognizes these standards' maturity issues. The reason to implement the HITSP security/privacy specifications into an EHR has more to do with interoperability than security. A Provider can secure communications with many methods including physical isolation, private networks, VPN, secret-sauce. What the EHR vendor is doing when they implement the HITSP security/privacy specifications is providing at least one way that their product will interoperate with others. They are not forbidding other ways. Meaning that just because an EHR has implemented HITSP/T17 (mutually-authenticated TLS) does not mean that a Provider can't use a VPN, they can. But if the EHR vendor doesn't implement something that provides a secure communications channel then we force Providers to use 'third party solutions' like VPN. Yes, using a VPN requires working with a third party solution. It is common for large hospitals to have VPN solutions, and that is fine. But it is NOT common for a small office to be able to support VPN by themselves.
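For readers wondering what "mutually-authenticated TLS" amounts to in practice, here is a hedged sketch using Python's standard ssl module. The certificate file names are placeholders, not a real deployment; the essential points are that each side loads its own certificate and key (to prove its identity) and a trust store (to verify the peer), and that peer certificates are required, not optional.

```python
import ssl

def make_mutual_tls_context(certfile=None, keyfile=None, cafile=None):
    """Build a client-side TLS context configured for mutual
    authentication (sketch; file paths are hypothetical)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # our identity
    if cafile:
        ctx.load_verify_locations(cafile)       # CAs we trust
    ctx.check_hostname = True                   # verify peer's name
    ctx.verify_mode = ssl.CERT_REQUIRED         # peer MUST present a cert
    return ctx

ctx = make_mutual_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

In a real exchange the context would wrap a socket to the partner system; the point of the EHR vendor shipping this capability is that two certified products then share at least one known-good way to connect securely.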

Fifth: To the Providers, HIE, Labs, Payers, and any other party that is 'operational': Use a Security Risk Management approach.  Security by checklist is a bad idea. ‘Security Theater’ is a bad idea. This starts with a group of people thinking about security threats; that is, threats to the confidentiality, integrity, and availability of any resource. For any security threat, the likelihood and impact are used to determine the appropriate reaction to that threat. An additional benefit of the risk management path is that security threats can be evaluated together with patient safety threats. Oftentimes a security threat does introduce a patient safety threat, but an important thing to avoid is mitigating an unlikely security threat with a technology that introduces a patient safety threat. As with any risk management, there is never zero risk. NIST has a really readable guide: NIST SP 800-30, Risk Management Guide for Information Technology Systems. If not this, then pick one from a blog post by Jeremiah Grossman: In absence of a security strategy.
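The likelihood-and-impact step above can be sketched as a simple scoring exercise. The scales and threat list below are invented for illustration (SP 800-30 describes the full qualitative process); the point is that ranking by likelihood times impact directs mitigation effort to the biggest risks, rather than to whatever a checklist mentions first.

```python
# Illustrative 3-level scales; real assessments often use richer ones.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, impact):
    """Combine likelihood and impact into a single comparable score."""
    return LEVELS[likelihood] * LEVELS[impact]

# Invented threats: (name, likelihood, impact)
threats = [
    ("stolen backup tape",        "medium", "high"),
    ("insider snooping",          "high",   "medium"),
    ("crypto brute-force of AES", "low",    "high"),
]

# Rank threats so mitigation effort goes where risk is greatest.
ranked = sorted(threats, key=lambda t: -risk_score(t[1], t[2]))
for name, likelihood, impact in ranked:
    print(name, risk_score(likelihood, impact))
```

Notice that the mathematically scary threat (brute-forcing AES) scores lowest because its likelihood is negligible; that is exactly the kind of conclusion checklist security never reaches.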

It is so important that we stop the churn and get going with something. Achieving Meaningful Use is a good thing. We will not get there if we don't get started. We should take a pragmatic approach. Take small steps that are most likely to produce the most Meaningful Use. Fear, Uncertainty, and Doubt have frozen progress long enough.

Tuesday, January 5, 2010

Meaningful Use clearly does not mean Secure Use

We have waited long and hard for the definition of Meaningful Use. We now have the "Initial Set of Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology" Interim Final Rule (IFR), and it is very disappointing. I will let Keith address the numerous content problems.

The requirements for security in the IFR are useless and dangerous. They don't reference any of the good work of HITSP or the HIT Standards committees. This makes me wonder why I spend so much time in open consensus development. On the one hand any good security plan can claim compliance, but so can someone that uses XOR as an encryption algorithm. In fact I am worried that this will give the impression that there is some form of safe-harbor if an organization meets the IFR yet has not done the risk assessment defined in the HIPAA Security Rule. The IFR does seem to try to not say this, but we all know how well people read the text vs the summaries written about the text.

I was expecting that the definitions in the IFR would be about interoperability and not functional requirements for an EHR supporting Meaningful Use. Turns out that the security requirements are all functional requirements, and not a complete set. CCHIT has far more specific and accurate criteria. To show a mapping would give credit to the poor wording found in the IFR. Ok, so a close approximation can be drawn and would be educational.  First point of clarification, one must start with the regulation text as that is the normative part. This starts on Page 111. Security fills Pages 119-120
  • 170.210 (a) - Encryption and decryption of electronic health information
    • I think they mean to apply encryption to data at rest and in transit, but this is not clear. The biggest problem here is their lack of standards references, thus anything can qualify as encryption (e.g., XOR with a 128 bit fixed key).
    • SC 06.01.1 - The system shall support protection of confidentiality of all Protected Health Information (PHI) delivered over the Internet or other known open networks via encryption using the Advanced Encryption Standard (AES) and an open protocol such as TLS, SSL, IPSec, XML encryptions, or S/MIME or their successors.
    • SC 06.03 - For systems that provide access to PHI through a web browser interface (i.e. HTML over HTTP) shall include the capability to encrypt the data communicated over the network via SSL (HTML over HTTPS). Note: Web browser interfaces are often used beyond the perimeter of the protected enterprise network
    • SC 06.06.1 - The system, when storing PHI on any device intended to be portable/removable (e.g. thumb-drives, CD-ROM, PDA, Notebook), shall support use of a standards based encrypted format using the Advanced Encryption Standard (AES).
  • 170.210 (b) Record actions related to electronic health information
    • SC 02.03 - The system shall be able to detect security-relevant events that it mediates and generate audit records for them. At a minimum the events shall include those listed in the Appendix Audited Events. Note: The system is only responsible for auditing security events that it mediates. A mediated event is an event that the system has some active role in allowing or causing to happen or has opportunity to detect. The system is not expected to create audit logs entries for security events that it does not mediate.
    • SC 02.04 - The system shall record within each audit record the following information when it is available: (1) date and time of the event; (2) the component of the system (e.g. software component, hardware component) where the event occurred; (3) type of event (including: data description and patient identifier when relevant); (4) subject identity (e.g. user identity); and (5) the outcome (success or failure) of the event.
  • 170.210 (c) Verification that electronic health information has not been altered in transit
    • SC 06.04 - The system shall support protection of integrity of all Protected Health Information (PHI) delivered over the Internet or other known open networks via SHA1 hashing and an open protocol such as TLS, SSL, IPSec, XML digital signature, or S/MIME or their successors.
  • 170.210 (d) Cross-enterprise authentication
    • This one is very confusing. The regulation text is lacking any detail, so I need to refer back to the discussion above. In that discussion they reference IHE XUA (strange how they skipped over HITSP/C19). 
    • This kind of requirement is missing from CCHIT as CCHIT recognizes that further profiling of SAML is needed beyond XUA/C19. This is spoken of in the IFR comments, but not in the regulation text. The text would allow any method, presumably even a plain text string.
  • 170.210 (e) 
    • SC 02.03 - The system shall be able to detect security-relevant events that it mediates and generate audit records for them. At a minimum the events shall include those listed in the Appendix Audited Events. Note: The system is only responsible for auditing security events that it mediates. A mediated event is an event that the system has some active role in allowing or causing to happen or has opportunity to detect. The system is not expected to create audit logs entries for security events that it does not mediate.
    • SC 02.04 - The system shall record within each audit record the following information when it is available: (1) date and time of the event; (2) the component of the system (e.g. software component, hardware component) where the event occurred; (3) type of event (including: data description and patient identifier when relevant); (4) subject identity (e.g. user identity); and (5) the outcome (success or failure) of the event.
    • AM 30.06 The system shall have the ability to provide support for disclosure management in compliance with HIPAA and applicable law.
    • (and others)
All of the other SC requirements are also critical. For example: authenticating both ends of a transmission (SC 06.05) seems to be totally missing, meaning there is no assurance that you are sending PHI to who you think you are sending it to nor is there assurance to them that they are receiving it from who they think is sending it.
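The XOR worry raised under 170.210(a) above is not hypothetical hand-wringing; a few lines show how worthless an unreferenced "encryption" requirement can be. This toy cipher (entirely my own strawman, not anything any vendor ships) XORs data with a fixed 128-bit key. It "encrypts and decrypts electronic health information," yet any attacker who guesses a 16-byte plaintext fragment recovers the whole key:

```python
KEY = bytes(range(16))  # a fixed 128-bit "secret" key

def xor_cipher(data: bytes) -> bytes:
    """Toy 'encryption': XOR with a fixed repeating key.
    Meets the IFR's literal text; provides almost no security."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

plaintext = b"PATIENT: John Doe, DOB 1970-01-01"
ct = xor_cipher(plaintext)

# Known-plaintext attack: XOR ciphertext with a guessed 16-byte
# fragment of plaintext, and the key falls right out.
recovered = bytes(c ^ p for c, p in zip(ct, b"PATIENT: John Do"))
assert recovered == KEY

print(xor_cipher(ct) == plaintext)  # decrypts fine → True
```

This is why the CCHIT criteria name AES and open protocols like TLS, and why regulatory text that says only "encryption" invites exactly this kind of compliance theater.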

The new twist that is found in the IFR is relative to disclosures, and specifically that they expect that a ‘description of the disclosure’ must be recorded into the disclosure log. Most experts on the audit topic recognize that post-processing of the audit log is what is used to derive the ‘description’. Post processing is also mostly where the ‘purpose of use’ is derived which is used to determine if the event is relevant to an accounting of disclosures. It is this post processing that CCHIT had implied in AM 30.06.

Totally absent is Privacy, not even opt-in or opt-out Consent. I think that BPPC can and should be used to enable OPT-IN in an XDS-like RHIO (HIE/HIO). This is not to say that BPPC is the long term solution, but without an OPT-IN ability we will get nowhere on deploying the Health Internet. BPPC is 'good enough', and is a logical path toward something more advanced. HL7 is already working on that more advanced solution, and it is a logical extension to where BPPC is today.

The optimist in me says that 2011 is too soon for anyone to make adjustments in what they had already started, so it is good that one can drive an ocean-freighter through this rule. The pessimist in me says that if people were not on the right track, this rule is going to do NOTHING to help them find the right track. So, for those that need the most guidance, HHS has given you NOTHING.