Tuesday, February 28, 2012

Suite of Network Infrastructure and Protocol Training courses

This just crossed my desk. For more, go to www.gehealthcare.com/hcit
Yes, I work for GE Healthcare

From Essentials of Healthcare IT to Introduction to IHE Cross Enterprise Document Sharing Profile, our suite of Network Infrastructure and Protocol Training courses can help regardless of your facility's EMR Stage.
» Essentials of Healthcare IT
» Essentials of DICOM
» Essentials of HL7
» Wireless in the Healthcare IT Environment
» Securing the Healthcare IT Environment
» Introduction to HL7 CDA 2.0
» Introduction to IHE Cross Enterprise Document Sharing Profile
All designed to meet your needs regardless of your equipment choices.

For more information, visit us at:

Questions? Contact us today: edservices@ge.com or 888-799-9921

Saturday, February 25, 2012

ATNA auditing of CCOW context changes

On the IHE mailing list the following question came in: “Are there any commonly accepted or standard ATNA message structures for auditing CCOW context changes?

The short answer is that the CCOW transactions are not particularly ‘Security Relevant’. The things that happen before or because of the transaction usually are. Those security relevant events that happen before or after should be recorded as ATNA events. For example, it should be the User Authentication application that records that a user was authenticated. The actual CCOW transaction to set the context or to read the context is not that interesting.

Deeper Discussion:
For many other IHE Profiles (e.g., XDS, PIX, EUA) we have identified the security audit message that would be appropriate to capture that the transaction happened. This is done nowadays as part of the Security Cookbook, where we do a risk assessment to identify reasonable security and privacy controls, including whether there is a need for a security audit event to be recorded.

IHE does have a profile that leverages CCOW: the “Patient Synchronized Applications” (PSA) Profile was written around the same time as the “Audit Trails and Node Authentication” Profile, but before the Security Cookbook. So one can see how it might be possible that IHE simply hasn’t thought about the relationship between PSA and ATNA, or hasn’t fully executed the Security Cookbook. I can’t say that this is not the case, but I don’t think that there would be a strong reason to define the ATNA events for PSA.

Not all transactions are going to be security auditable events, and more importantly there are far more security auditable events than there are network transactions. Far more security relevant events happen in the normal workflow of an application that have no external transaction. This is something that I have tried to cover multiple times, as many people get a false impression that the only ATNA events are those that are defined by IHE. The main security relevant events that IHE defines are ‘Import’ or ‘Export’ events. This is the reason why XDS was so highly covered with ATNA, as everything about XDS is either an Import or an Export event; and one that is most likely not just to the application but to the organization – hence the Cross-Enterprise prefix.

Another factor is that the IHE ATNA audit message is about Surveillance, not forensics or even debugging. The statement that the CCOW transactions are not security relevant does not mean that there should be no log, but that log is more of a debugging log, something that might be called upon if deeper analysis is needed (forensics).

Who did what when:
As stated above the CCOW transactions are not that interesting, but the events leading up to a context change and the events caused by a context change are very interesting.
  1. User Authentication: This event should be logged by the application that actually authenticated the user. It is important that this responsibility be here, as it is more important to log failed attempts to authenticate. The failed attempts would never hit the context, so the context changes would not be helpful to detect an attack at the user authentication.
  2. Patient Selected: The application that is used to select a new patient should be recording that a new patient was selected. 
  3. Patient Changed: All applications that change their display because of the context change likely are showing the user something new, and thus there is a need to record that that new thing is being shown to the user.
  4. Object Changed: The CCOW specification allows for other objects within the patient record to also be changed and thus synchronized.
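As a rough illustration, the events above would each be captured as their own audit record by the application where they occur. The sketch below is a deliberately simplified, RFC 3881 / DICOM style record built as a Python dict; the field names are mine, not the normative ATNA schema:

```python
from datetime import datetime, timezone

def audit_event(event_type, user_id, patient_id=None, outcome="success"):
    """Build a minimal, illustrative audit record (not the normative ATNA schema)."""
    return {
        "event_type": event_type,   # e.g. "user-authentication"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "patient_id": patient_id,   # None for events with no patient context
        "outcome": outcome,         # "success" or "failure"
    }

# 1. Logged by the application that authenticated the user --
#    failed attempts never reach the CCOW context, so they must be logged here.
auth_fail = audit_event("user-authentication", "dr.smith", outcome="failure")

# 2./3. Logged by the applications that select, and then display, the new patient.
selected = audit_event("patient-record-selected", "dr.smith", patient_id="MRN-12345")
viewed = audit_event("patient-record-viewed", "dr.smith", patient_id="MRN-12345")
```

The point of the sketch is the division of responsibility: three different applications each emit their own record, and the CCOW transaction itself emits none.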
Gap in ATNA?:
It is at this point that a potential gap in the current ATNA specification comes into the discussion. There are clear ways to indicate that the user is being shown a patient study, document, order result, or other specific object. There is not a clear way to simply say that the patient identity has changed and that no specific information is being shown.

The one case where this comes up, and I think makes this a bit harder, is the typical EHR/EMR case or the Nurse Station, where the first screen is a high-level view. I have heard this referred to as the ‘chart’. There is clearly information on this screen, but it isn’t a discrete object; rather it is made up of the most interesting values in the EHR. I covered this before, and am still not clear what is security relevant.

If nothing in particular was shown to the user, was there really any security relevant event? This might simply be the case: one doesn't record an ATNA event until there is some data shown to the user.

I don't think I have come up with a gap or a reason why CCOW events should be recorded using IHE ATNA. That is not to say that they can't be, just that I don't see a compelling reason to specify it.


Friday, February 24, 2012

FW: Invitation to Congressional Briefing on the Financial Impact of Breached Protected Health Information

I won’t likely be able to attend this. Please let me know if you will be able to attend. I would be very interested in the output.

Dear Colleague:

Data breaches of protected health information (PHI) are growing in frequency and in magnitude and are having huge financial, legal, operational, clinical, and reputational repercussions. Protecting this valuable health data is an essential part of business for all health care organizations and for quality health care.

On March 5, you are cordially invited to attend a congressional briefing – hosted by Congressional Bipartisan Privacy Caucus Co-Chairs Congressman Edward J. Markey (D-MA) and Congressman Joe Barton (R-TX) – that will unveil a new report and method to help organizations operating in the health care industry evaluate the financial impact of breached PHI and the business case for reducing those breaches.

Full details follow below; the attached publicity flyer may be freely shared.
Congressional Briefing: The Financial Impact of Breached Protected Health Information: A Business Case for Enhanced PHI Security

Co-hosted by Congressman Edward J. Markey (D-MA) and Congressman Joe Barton (R-TX), Co-Chairs of the Congressional Bipartisan Privacy Caucus.

Health care organizations are entrusted with safeguarding patient privacy and protected health information (PHI), but their security efforts are not keeping pace with the growing risks of exposure of PHI as a result of electronic health record (EHR) adoption, the number of organizations handling PHI, and the growing rewards of PHI theft. In order to sustain delivery of quality health care and ensure patient safety, the health care industry and its service providers require adequate processes and resources to protect PHI. 

The Financial Impact of Breached Protected Health Information: A Business Case for Enhanced PHI Security provides health care organizations a 5-step method – PHIve (PHI Value Estimator) – to assess specific security risks and build a business case for enhanced PHI security. This tool estimates the overall potential costs of a data breach to an organization, and provides a methodology for determining an appropriate level of investment needed to strengthen privacy and security programs and reduce the probability of a breach. Armed with the information contained in this free report, organizations operating in the health care sector can take immediate action to commit the resources needed to head off the potentially devastating consequences of a PHI data breach.

§  Joe Bhatia, President and CEO, American National Standards Institute
§  Catherine Allen, Chairman and CEO, The Santa Fe Group
§  Larry Clinton, President and CEO, Internet Security Alliance
§  Rick Kam, President and Co-Founder, ID Experts, Chairman of the PHI Project
§  James C. Pyles, Principal, Powers Pyles Sutter & Verville PC
§  Lynda Martel, Director, Privacy Compliance Communications, DriveSavers Data Recovery
§  Mary Chaput, CFO and Chief Compliance Officer, Clearwater Compliance LLC

Monday, March 5, 2012, 12:30 p.m. – 1:30 p.m.
A simple brown-bag lunch will be served.

Rayburn House Office Building, Room Rayburn B-340
Independence Avenue and South Capitol Street
Washington, DC, 20003

Launches on March 5, 2012, at 10:00 a.m.: webstore.ansi.org/phi 

The event is open to members of Congress and their staff as well as all interested members of the press. RSVP to pr@ansi.org.       

Complimentary copies of the report and an accompanying media kit will be available at the congressional briefing.

Thursday, February 23, 2012

Simple and Effective HIE Consent

Explicit OPT-IN is the most simple and effective Privacy Consent model for Health Information Exchanges (HIE). It can be managed and enforced local to where the data lives. Being Explicit OPT-IN, it offers no surprise to the Patient, limiting the 'ick' factor.

Background: I am encouraged by many HIE efforts that are developing. John Halamka offered his opinion on HIE Consent Policy. Through some clarifications with him in comments on his blog article I understand and agree with what he is proposing. I have further refined the understanding in a Google+ discussion thread.

I have pointed at the Connecticut policies, which are consistent. I understand that within the private walls of the Regional Extension Centers (REC) they are running a survey that is coming up with a similarly simple yet effective system. I have also been involved in other HIEs doing similar things.

The proposal is the most basic of Consent policies, where the patient has control from the start. That is, nothing is shared with other organizations until the patient chooses to share (OPT-IN). It doesn’t stop there, but also includes the ability to change their mind (OPT-OUT). The critical aspect is that it is a rather binary state: either you are sharing or not.

Implicit vs Explicit: The main statement here is that the ‘default’ value is OPT-OUT. Meaning if you don’t know the state of the Privacy Consent, then you must assume OPT-OUT. In an ‘implied’ consent environment, like HIPAA, the default value is OPT-IN, meaning the assumption is that the patient wants their data shared. The difference matters only in what is assumed to start with. HIPAA is Implied Consent; the suggestion for an HIE is Explicit Consent. Once a state for Consent is known, the mechanism works the same way when the patient chooses to change their mind. The difference is simply what the default is to begin with.
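The distinction boils down to a single default value. A minimal sketch (the function and value names are mine, purely illustrative):

```python
def may_share(recorded_consent, model):
    """Decide whether data may be shared under a given consent model.

    recorded_consent: "OPT-IN", "OPT-OUT", or None (no consent on record).
    model: "implicit" (HIPAA-style) or "explicit" (HIE-style).
    """
    if recorded_consent is None:
        # The only difference between the two models is this default.
        return model == "implicit"
    # Once a consent state is known, both models behave identically.
    return recorded_consent == "OPT-IN"
```

With no consent on record, the implicit model shares and the explicit model does not; once the patient has expressed a choice, the two models are indistinguishable.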

I think it made sense in HIPAA to have implied consent, as the scope of HIPAA consent is for use within the Covered Entity. The explanation in HIPAA for this implied consent is that the patient has already chosen to walk into the healthcare facility; it is this act of walking into the facility that is their expression of Consent. Yes, there is lots of argument that this should have been Explicit, but it isn't. I am just saying that I do understand the logic given in HIPAA.

I think it is equally logical for HIE to totally change that default behavior to an Explicit Consent. The Patient is not walking into the virtual HIE environment; they are visiting their care provider. Thus it is right to ask the patient how far and wide they want their data available. When the patient isn't surprised, they feel more in control.

Enforcement: A key to making this both simple and effective is to have the data holder enforce this control. Adding this factor greatly simplifies Consent Management, as it makes it totally a local problem. Local in this case meaning that the data holder is the only one that needs to know what the current status of Consent is. That is, when someone asks for data about a specific patient, the data holder looks at the status of Consent and either returns the data asked for, or returns no data.

The requester doesn’t need to know the status of Consent, as they either get information or not. This is an important, but hard to understand, simplicity. This is a core part of the NwHIN-Exchange today. This simplicity is both elegant and effective across the varied privacy landscape that makes up the USA. With something like NwHIN-Exchange or even NwHIN-Direct, this model is very easy to implement.
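A minimal sketch of this data-holder enforcement; the data structures and names are illustrative, not any particular product's API:

```python
# Illustrative local state held only by the data holder.
consent_status = {"pat-001": "OPT-IN", "pat-002": "OPT-OUT"}
records = {"pat-001": ["summary.pdf"], "pat-002": ["labs.pdf"]}

def handle_query(patient_id):
    """Return documents only when the locally recorded consent is OPT-IN.

    Unknown consent defaults to OPT-OUT (the Explicit model). The requester
    simply sees an empty result and never learns the consent state itself.
    """
    if consent_status.get(patient_id) != "OPT-IN":
        return []
    return records.get(patient_id, [])
```

Note that an OPT-OUT patient and an unknown patient produce the same empty response, which is exactly the property that keeps Consent Management a purely local problem.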

XDS specific enforcement: This enforcement gets a little harder when you have some central infrastructure that knows something about the patient, like the Registry found in XDS. In this case one does need to think a little harder on what the Policy (OPT-IN and OPT-OUT) means to this Registry. In the XDS environment one needs to determine if the central Registry should be blinding entries when the Consent is in the OPT-OUT state.

It might be that disclosing what documents are available is not considered a problem, but it might be.

Enforcing Consent at the Registry is not too hard, but it does mean that each organization involved in the HIE must communicate to the HIE the current state of consent for that patient. And the Registry must determine, for each source of data, if it has an OPT-IN for that organization. This is not as simple, but it is still rather simple. This is the primary domain where BPPC is used, as the flag indicating OPT-IN vs OPT-OUT.
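The Registry-side blinding described above might look something like this; the entry schema is a made-up simplification, and the OPT-IN set stands in for whatever the BPPC consent documents record:

```python
def visible_entries(registry_entries, opted_in_orgs):
    """Blind Registry entries whose source organization has not reported
    OPT-IN for this patient (e.g., via a BPPC consent document)."""
    return [e for e in registry_entries if e["source_org"] in opted_in_orgs]

entries = [
    {"doc_id": "doc-1", "source_org": "hospital-A"},
    {"doc_id": "doc-2", "source_org": "clinic-B"},
]
# Only hospital-A has an OPT-IN on file for this patient.
shown = visible_entries(entries, {"hospital-A"})
```

The per-organization check is what makes this harder than the pure data-holder model: the Registry must track a consent flag per source, not just per patient.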

There is also the question of whether this same thing needs to be done with the XDS-wide Affinity Domain Patient ID system. In the most formal form defined in XDS, this should not be necessary, as the only thing one learns from an XDS Affinity Domain Patient ID is – given your patient ID, what is the Affinity Domain ID. This doesn't tell you anything about any data sources, just that the patient is known. The issue comes when some implementations make this more like a Multiple-Patient-Identity cross-reference (PIX Manager), meaning that what you get back is not just the Affinity Domain ID, but also all the cross-referenced ID values. This is not called for in XDS, but is often just done because the data is known.

Special Cases (Sensitive topics): There is some data that is so sensitive that it likely should simply not be shared until we get far more complex Controls designed and deployed widely. This doesn’t mean that it is completely non-portable, but rather that it should be shared very carefully. This can again be simply and effectively controlled at the data holder. When data is created, a decision can be made as to whether this data falls into the special topics, or falls into the normal clinical data.

I might even suggest that this kind of data is exactly the kind of data that calls for the Direct push model. Many people look at Direct and get angry that it has no Consent or Access Control built in; I will point out that it actually very explicitly pushes these as Pre-Conditions. Meaning that there is some decision PRIOR to pushing the data that determines if it is the right thing to do. In the case of Sensitive Topics this decision is clear and clean. What the Direct Project has going for it is that it is indeed a directed push from the one with the data to the one who needs the data (and has the legitimate authorization). The security model is strong, if somewhat hard to execute.

Break-Glass: The topic of 'break-glass' is very broad, but needs to be discussed. First, what is it that everyone thinks is Break-Glass? Please don't use the emergency room as break-glass; this may not be scheduled from the patient's perspective, but it is very controlled from the Healthcare Provider perspective. Access by people in the emergency room is simply access by people in the emergency room. This is a functional role. It needs to be clear what rights this functional role has regarding OPT-IN and OPT-OUT.

Advancements: This simple and effective HIE Consent is NOT the final solution. It is proposed as a stepping stone. It is useful for a large number of patients. There is plenty of standards work going on now to define better solutions. The good news is that these solutions build both on standards we are using today for Consent and on standards that are used in other industries. Healthcare is special, as the data is very personal to the patient; and the patient has a strong right globally to control their data.

Conclusion: Start sharing using an Explicit OPT-IN environment, where the OPT-IN is gathered and enforced at the data holder's location. Stepping stones will advance the state-of-the-art.

Tuesday, February 21, 2012

HHS/ONC - Mobile Devices Roundtable: Safeguarding Health Information

I expect this to be a chance for existing 'good practices' to be shared and documented in a way more consumable by healthcare providers that have little IT support. I totally expect that large healthcare providers don't need this guidance, but I could be wrong. I wish I could attend, but will likely be watching from the sidelines. The notice went out last week for a meeting on March 16th.
Background: One of the key goals of the Federal Health Information Technology Strategic Plan is to inspire confidence and trust in health IT and electronic health information exchange by protecting the confidentiality, integrity, and availability of health information. ONC’s Office of the Chief Privacy Officer (OCPO), along with the HHS Office for Civil Rights (OCR), recently launched a privacy and security mobile device project. The project builds on the existing HHS HIPAA Security Rule - Remote Use Guidance and is designed to identify privacy and security good practices for mobile devices. The identified provider use case scenarios and good practices to address those scenarios will be communicated in plain, practical, and easy to understand language for use by health care providers, professionals, and other entities. 
Roundtable Purpose: To gather public, industry, and subject matter expert input that will help inform the development of an effective and practical way to bring awareness and understanding to those in the clinical sector regarding securing and protecting health information while using mobile devices. 
Roundtable Objectives:
  • Address the current privacy and security legal framework for mobile devices accessing, storing and/or transmitting health information;
  • Discuss real world usage of mobile devices by providers and other health care delivery professionals to understand their expectations, attitudes, challenges and needs;
  • Gather input regarding the information (and format) providers and other health care delivery professionals want and need to help them safeguard health information on their mobile devices; and
  • Gather input on existing and emerging privacy and security good practices, strategies and technologies for safeguarding data on mobile devices.
My overall answer is that mobile devices are not different from any other device. Mobile Devices are just more likely to get lost or stolen (for pawn). It is this increased likelihood (of known risks) that needs to be considered. Thus good application design keeps sensitive information off of the device. Since this is a USA domain, it is quite easy to point at NIST, who have excellent guidelines on this topic:
  • NIST Guidelines on Cell Phone and PDA Security SP800-124.pdf
  • NIST Guide to Storage Encryption Technologies for End User Devices SP800-111.pdf
  • NIST Recommended Security Controls for Federal Information Systems and Organizations SP800-53 rev3
The policy, methods, and technology used to protect a mobile device are commonplace in IT security circles. There are few specifics to Healthcare. There should NOT be much specific to healthcare. Healthcare should re-use as much of common IT security as possible. I always encourage a Risk Assessment/Management approach, just like is the basis of HIPAA Security. This is the best approach to reasonable application of security technology according to risk Impact and Likelihood.
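To make the Impact and Likelihood idea concrete, here is a toy scoring sketch. The scales and threshold are invented for illustration; a real risk assessment (e.g., along the lines of NIST SP 800-30) is considerably richer:

```python
# Illustrative ordinal scales -- real methodologies define their own.
IMPACT = {"low": 1, "moderate": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_score(impact, likelihood):
    # Simple ordinal product of the two factors.
    return IMPACT[impact] * LIKELIHOOD[likelihood]

def needs_mitigation(impact, likelihood, threshold=4):
    """Flag risks whose score meets the (illustrative) acceptance threshold."""
    return risk_score(impact, likelihood) >= threshold

# A lost mobile device: high impact, likely occurrence -> mitigate (e.g., encrypt).
# A locked-down server theft: high impact, rare occurrence -> may be acceptable.
```

The point is not the numbers but the shape of the reasoning: the same high-impact event can land on either side of the threshold depending on likelihood, which is exactly why mobile devices (high likelihood of loss) deserve different controls than fixed equipment.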

The best approach to a mobile device is to keep PHI off of it. If no PHI is on the device, then you have just lost a piece of technology (presuming you still control the access-control path to the PHI). This is not easy to achieve, or even a reasonable expectation, but with good software design it is possible to make it really hard to 'save' data onto the device.

If you save PHI onto the device, then you must take on the responsibility of protecting it there. This means access controls to the device, and likely encryption technology on the storage in the device. Yes, both are needed.

What is not clear in the HHS/ONC initiative is if they are talking about general-purpose mobile devices or special purpose 'medical devices'. Not much changes, but some critical things do change, such as control of the configuration. In the case of a medical device there is joint control, as the Breach Notification obligation is in the Covered Entity's hands, but the safe-and-effective obligation is in the Medical Device vendor's hands. This is the topic of Encryption is like Penicillin.

Monday, February 20, 2012

Encryption is like Penicillin

There is a renewed interest in broadly applying encryption to all information that is stored anywhere. This drive arises from some highly visible cases where large amounts of Patient Identifiable Health Information were lost or stolen. USA Breach Notifications have almost doubled over the previous year. Many state and federal laws require various levels/types of Breach Notification to impacted individuals, government authorities, and even the public at large. These Breach Notification laws often exclude the requirement for notification when the lost or stolen Patient Identifiable Health Information is stored in encrypted form. This exclusion creates a strong incentive to encrypt all data at rest.

I am not going to advocate against encryption; it is a wonderful security tool. I am going to advocate for using this tool appropriately. Like Penicillin, it must be used to fight only the things that it can fix and should not be overused, as it has side-effects and unintended consequences. I speak often about using Risk Assessment/Management to address security and privacy; it is the logic tool that security experts use to identify what is truly a risk worth addressing, and what is the appropriate response to address that risk.

In Healthcare, care givers use a form of Risk Assessment/Management to diagnose and treat as well. This is why I want to bring up the imperfect analogy that Encryption is like Penicillin. The application of Encryption to address a security risk is a form of prescription, but much like Penicillin, Encryption is not the cure-all. Encryption, like Penicillin, works for specific types of problems. Encryption, like Penicillin, when used improperly can make the overall health worse (see the yellow sticky on the computer monitor for the encryption key). Encryption, like Penicillin, needs to be applied carefully after assessing the risk. I could say that Encryption, like Penicillin, when used too much becomes ineffective; but I don't know exactly how to get to that conclusion. Encryption, like Penicillin, is ineffective when not fully applied correctly (poor key management vs stopping taking pills when you feel better). There are cases where applying Encryption, like Penicillin, would cause greater harm. I am guessing I could go on and on; but I am way beyond my understanding of Penicillin.

Encryption is seen as a cure-all, like Penicillin was decades before. We have gotten smarter at applying Penicillin today; we need to apply as much smarts to applying Encryption. So, please apply Encryption appropriate to the risk that it is solving. The Breach Notification legislation giving a 'get out of jail free' card to those that have applied encryption doesn't help. Meaningful Use should add clarity that a well designed system that never puts PHI onto a portable device is just as good as applying encryption to a portable device because it has PHI needing to be protected. Yet we need to recognize that sometimes it is appropriate for a portable device to hold PHI and not be encrypted; in these cases strong physical controls are needed, recognizing that there is a risk that the strong physical controls may fail, leaving no fallback protection.

The analysis is needed, Risk is never brought to zero.

The following is a deeper White Paper I wrote a few years ago that never got published.  It focuses on the topic applied to formal Medical Devices, where there is a clear intended use and patient safety concern. This is just as applicable to informal Mobile Health devices:
This white paper recommends a balanced means for appropriately selecting the security technology in support of regulatory compliance. In many cases, it is consistent with a device's intended use that removable media or hard-drives in portable devices should be encrypted to provide the right mitigation for the risk of loss of that device. However, there is also a case to be made where a mandate, applied too broadly, can adversely impact the specialized Medical Device in a manner that reduces its safe and effective use for its intended purpose. Consistent with the goals of health care delivery, the design and operation of a Medical Device gives top priority to mitigating the risks to Patient and Operator safety. Such design considerations combined with the environment of use sometimes lead to technical security controls that are less stringent than those found on a general purpose laptop. This is a common perception among policy makers; however, applying security technology appropriately can produce more secure systems when the system is a dedicated device, and not a general purpose laptop, since it is much easier to define the use case scenarios and mitigate the risk.

For most of its practice, health care has relied on printed records on paper and film to facilitate workflow during diagnosis and treatment. The security and privacy of paper records systems have relied on care givers' professional oaths, physical security, and the reality that it is time consuming and hard to conceal unauthorized record duplication and dissemination. The emerging and maturing Electronic Medical Records (EMR), Electronic Health Record (EHR), and Health Information Exchange (HIE) provides far more opportunity for access to the data, both authorized and unauthorized. New diagnostic and treatment technologies plus the introduction of inexpensive means of storage, communication, and manipulation of medical data has further increased the volume of information managed.

Physical controls are less meaningful for electronic records, and one of their important advantages is ease of transfer and viewing. Thus, the electronic form of health records is, perhaps too often, easily moved onto portable devices such as iPhones, iPods, tablets, USB-Memory, CD/DVD, or even portable computers. The reasons the data are moved outside of a controlled EMR and onto a portable device are many, but the primary push seems to be ease of use, collaborative workflow, and even expedited quality monitoring.

Breach Notification Regulations
There has been a large increase in states and countries that have developed Breach Notification laws and regulations. This began in 2002 with California and, in the US, spread to 46 states in 8 years. In general, these laws apply to finance- and ID theft-related personal data, but starting with the 2009 California law, medical data was included as a type requiring mandatory breach notification. In the USA, Personally Identifiable Information (PII) is a term often used to identify this broader set of information, which includes: Financial data (SSN, credit card, bank account, insurance ID, etc.) and/or Healthcare data (patient status, diagnosis, etc.). Massachusetts went beyond the California reporting statutes to actually legislate the required security controls around PII. The original proposed law had a strong focus on encryption, but the enacted law (MA 201 CMR 17.00) included risk analysis to allow for scalability of the solution to the realities of the risks without explicitly mandating encryption.

After the aggressive but voluntary breach notification campaign by the United Kingdom’s Information Commissioner’s Office, Germany became the first European country to enact mandatory breach notification effective on September 1, 2009 (the July 10, 2009 amendment to the Federal Data Protection Act – FDPA). There is a great deal of discussion in the EC about adding breach notification to the European Commission’s Privacy Directive (95/46 EC).

The USA Federal medical privacy regulation known as HIPAA was updated in January 2010 with the HITECH Act, which includes explicit requirements for privacy breach notification. It amplifies the broader applicability of the security controls listed in HIPAA, including encryption for electronic protected health information (HIPAA 164.312(2)(iv)). However, it does this as an addressable technical control that is one of many selected in managing the risks associated with PII. In addition to this regulation, the USA Department of Health and Human Services (HHS) issued a Final Rule on breach notification that included notifying HHS directly of breaches of more than 500 individuals’ records. The final rule came out in July 2010; and we now have the HHS wall-of-shame. This is a very powerful force for change.

Canada includes Breach Notification in the Personal Health Information Protection Act (PHIPA). Although not specifically mandating encryption by law, some authorities, such as the Ontario Information and Privacy Commissioner, have ordered provincial health units to encrypt personal health information on mobile devices. Similarly, the United Kingdom’s National Health Service’s Chief Executive has directed that all transfer of PII or storage on portable devices be protected with encryption. So, some health organizations are mandating encryption, but to date, law and regulation have favored applying risk analysis to properly balance the cost and performance impacts of encryption with the potential benefits and harm.

Almost all of the laws and regulations mandating breach notification permit an exception to notification when the Patient Identifiable Information is unusable, unreadable, or indecipherable (as with encryption or de-identification). However, when a breach does occur, it is an embarrassing and potentially expensive situation that is complex to manage.
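The decision logic common to these regulations can be caricatured in a few lines. This is a deliberate oversimplification (real determinations involve legal review and vary by jurisdiction); the 500-individual threshold is the HHS Final Rule figure mentioned above:

```python
def notification_required(num_individuals, data_protected):
    """Caricature of the common breach-notification decision.

    data_protected: True when the PHI was rendered unusable (encrypted or
    de-identified), which most breach laws accept as grounds to skip
    notification entirely.
    """
    if data_protected:
        return {"individuals": False, "hhs_direct": False}
    return {
        "individuals": True,
        # HHS Final Rule: breaches of more than 500 individuals' records
        # must also be reported to HHS directly.
        "hhs_direct": num_individuals > 500,
    }
```

The `data_protected` short-circuit is precisely the exclusion that creates the strong incentive to encrypt all data at rest.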

Of course, there is careful thought behind allowing local organizations to conduct their own risk analyses to determine their own policies with regard to data protection (including encryption). A proper risk- and cost-balanced conclusion about security controls can only be made by qualified staff familiar with the institutional mission, the dangers associated with the provision of health care, and the real damage caused by PII breaches. Balancing safety, effectiveness, security, and privacy must be done in the face of an evolving and expanding legal, health, and threat landscape. The dictum “first do no harm” applies strongly here, and careful consideration of harm must come to bear.

Ways data gets exposed
To understand what a Breach is, let’s examine the ways that data can be exposed to unauthorized access.

Focused and funded hackers are after very specific information. They will likely use a wide variety of methods to get access to the information or resources they seek. Encrypting data at rest only protects against this threat when the access gained is not through a normal network interface. Even then, requirements such as those for medical emergencies demand that the medical device always be available for its intended purpose. In those cases, a device that self-authenticates into the application on boot can give a hacker direct access to the information they seek. This, however, could be mitigated by physical controls.

There are also less focused attacks using automated tools. In this case the attacker is opportunistic, often not even interested in the information but rather wanting a computing device with an internet presence. Again, the relevance is related to the type of access granted.

A sample of the breach notices posted at the HHS/OCR Breach Notification web site reveals very few instances where hacking was used, resulting in a moderate number of patients exposed. Any exposure is unacceptable, but we will see that this number is very small in comparison to the other causes.

Unauthorized Access
In this case someone was given access under a liberal access control policy but was caught, through audit logs, accessing records that they should not have been accessing. Some examples of this:

  • Coral Gables Couple –selling patient data to personal-injury lawyers
  • Lawanda Jackson [UCLA Med] – multiple celebrity patients
  • George Clooney – 2 dozen employees inappropriately accessed his records out of curiosity and were disciplined

When we look at the HHS/OCR Breach Notification web site, we see that this threat resulted in a handful of organizations needing to Notify, exposing tens of thousands of patients. These cases tend to be highly visible, as they are usually sensational or involve VIP individuals.

Accidental loss
In this case we find a laptop left in a taxi, or a memory-stick left in a suit coat at the cleaners. Often these cases are true accidents, and it is unlikely that the lost data are actually used in a way that causes harm. However, there is a chance that the exposure might lead to harm, so Breach Notification must be triggered.

When we look at the HHS/OCR Breach Notification web site, we see that this case results in a dozen entries, exposing a hundred thousand patients. This is clearly a case where encryption would mitigate exposure from these inherently portable devices.

Theft of a Portable Device
In this case we have outright theft by an attacker. These cases are distinct from a Hacker because the thief physically takes the device. It is not clear in these cases if the thief is interested in the data, or just wants the valuable device to pawn for cash. Some cases:

  • Laptop
  • workstation taken from corridors
  • medical-devices that incorporated computers
  • systems taken from loading docks
  • systems taken from shipping services

There is an extreme case of physical theft: BlueCross BlueShield of Tennessee experienced a breach of 1 million patients’ records in the theft of 57 hard drives from a call center.

Every exposure carries some level of risk
In all of these cases the breach carries some level of risk of exposing patient identifiable information. Of course, lacking patient identifiable information there is no risk of exposure. Risk exists because there is a potential for harm to the affected individuals. This harm may be realized by damage to reputation, personal dignity, future healthcare, finances, and peace of mind. Many of these arise from crimes relating to identity theft such as financial fraud or health fraud. Of course, these risks occur in a context of the benefits provided by health care. All analysis of “acceptable risk” must be accompanied by a clear understanding of the benefits of the technology, the risks of compromise to confidentiality, integrity, and availability of data, and the incremental risks to other facets of the healthcare process while trying to minimize a threat to any one element.

For more details on how to balance all of this in the process of Risk Management, see any text on risk management, such as the USA National Institute of Standards and Technology publication NIST SP 800-30, Risk Management Guide for Information Technology.
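The core arithmetic of that style of risk analysis is simple: risk is commonly modeled as the product of likelihood and impact. The Python sketch below illustrates the idea; the numeric scales and thresholds are my own illustrative assumptions, not values taken from NIST SP 800-30.

```python
# Illustrative sketch of a qualitative risk matrix in the style of
# NIST SP 800-30. The scales and thresholds below are assumptions
# chosen for illustration, not values from the publication.

LIKELIHOOD = {"low": 0.1, "medium": 0.5, "high": 1.0}
IMPACT = {"low": 10, "medium": 50, "high": 100}

def risk_score(likelihood: str, impact: str) -> float:
    """Risk is commonly modeled as likelihood x impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: float) -> str:
    """Map a numeric score back to a qualitative level."""
    if score >= 50:
        return "high"
    if score >= 10:
        return "medium"
    return "low"

# A stolen laptop with unencrypted PHI: likely event, high impact.
print(risk_level(risk_score("high", "high")))   # high
# The same laptop with encrypted PHI: the impact is greatly reduced.
print(risk_level(risk_score("high", "low")))    # medium
```

The point of the exercise is not the numbers themselves but that a mitigation (like encryption) changes the impact term, which is exactly the trade the regulations reward with the notification exception.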

The fix needs to be proportional to the risk
Mitigating any risk needs to be informed by benefits, costs, policy, physical environment, procedures, and other factors. The solution needs to be carefully considered so that it doesn't introduce new risks, such as a patient safety risk, or machine performance degradation risk. Encryption of Data-at-Rest is not always the preferred path. Considerations include:

  • Is there patient data on the device?
  • How much patient data on the device?
    • The Federal rule does not require notification for fewer than 500 patients exposed. State rules may be stricter.
  • How easy is the patient data to find and/or identify as such
  • What value does the patient data have (low grade data or high value data)
  • Are there other controls in place to mitigate the risk
  • Is the device use critical to life
    • Bedside monitor in an ICU is critical and is visible to caretakers at all times
    • Ventilator must perform job consistently
    • Defibrillator must be useable by anyone
  • Is the device physically secured
    • MRI bolted to floor in isolated room with double doors
    • Servers in data-centers with key-entry locks
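The considerations above amount to a decision procedure. As a purely hypothetical sketch (the factor names and decision rules are my illustrative assumptions, not a formal policy), it might look like:

```python
from dataclasses import dataclass

# Hypothetical sketch of the checklist above. The factors and the
# decision rules are illustrative assumptions, not formal policy.

@dataclass
class Device:
    stores_phi: bool          # Is there patient data on the device?
    record_count: int         # How much patient data?
    physically_secured: bool  # e.g., MRI bolted down in an isolated room
    life_critical: bool       # e.g., ventilator, defibrillator

def encryption_recommended(d: Device) -> bool:
    if not d.stores_phi:
        return False  # nothing to protect
    if d.life_critical:
        return False  # availability trumps; rely on other controls
    if d.physically_secured and d.record_count < 500:
        return False  # compensating controls, below federal threshold
    return True

laptop = Device(stores_phi=True, record_count=5000,
                physically_secured=False, life_critical=False)
mri = Device(stores_phi=True, record_count=200,
             physically_secured=True, life_critical=False)
print(encryption_recommended(laptop))  # True
print(encryption_recommended(mri))     # False
```

A real policy would weigh many more factors, but the structure is the same: encryption is one output of the analysis, not the starting assumption.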

There are positive benefits of Encryption:

  • Unanticipated loss of physical device is protected
  • Decommissioning of a physical device (hard drive) is less of a concern, as encrypted data doesn't need to be erased
    • The DoD does not consider an encrypted hard drive sufficient
  • Others as defined in NIST 800-111.

There are problems introduced by Encryption:

  • System performance reduction may introduce unintended operational problems
    • There are costs associated with keeping performance adequate: upgrades to processors, disks, transfer speeds, etc.
  • Changes to workflow to log in at startup
  • Loss of the encryption key results in destroyed data (non-escrowed keys)
  • Catastrophic emergency situations (e.g., Hurricane Katrina) make it harder to give volunteer health providers access to the device

Again, through structured risk analysis, these factors can be examined and balanced.

Health delivery organizations must build compliance programs meeting the needs of law, regulation, policy, contract, and ethics. In reaching security control decisions (e.g., encrypting data at rest), we need to consider the intended use of the device, physical environment, user, performance, fidelity of data, data volume, and cost.

Generally, for mobile devices (laptops, PDA, USB-Memory, backup-tape, CD-ROM, DVD, etc.) that store PHI, encryption following the NIST 800-111 guidelines would be well advised. There are standards available to support encryption at the Document or Media level; see the IHE Document Encryption profile.

Some devices and workstations are designed to not store locally or limit local storage to cache until the data is offloaded to formal storage. In these cases there is little or no PHI to be exposed if the device is lost or stolen.

For devices that are physically secured (MRI, CT, CathLab, PACS, EHR Servers, Data/Application Servers) in staff-controlled environments, it is not clear that the performance and equipment costs are proportional to the actual risks.

In general, medical devices provide substantial benefits in the diagnosis, treatment, and monitoring of disease. Protecting the PII managed by these devices is essential in the provision of health care. Decisions about the specific means of protecting that data must take into consideration the safety of patient and operator, the effectiveness of the organization’s health mission, and the security of the data and systems managed. There is no one solution that fits all potential threats – a carefully considered balance is necessary.


Friday, February 17, 2012

NwHIN Exchange -- Impressive success

The NwHIN Exchange continues to progress, and is now released from some of the previous shackles. I know that this is posted on the HHS site - Nationwide Health Information Exchange. I just think that more people need to recognize just how BIG this is.

Current Participants (as of 12/05/2011):
There are plenty of organizations that have expressed interest in joining. Right now the queue is simply too much work to get through. They are onboarding as fast as they can. I totally agree with being very careful to assure governance is in place. From what I know the list includes:
Organizations in Process of Joining
  • Alabama’s One Health Record
  • Alaska HIE
  • Catholic Healthcare West
  • Central Alabama HIE
  • Conemaugh Health System
  • Idaho Health Data Exchange
  • Indian Health Services 
  • Indiana State Department of Health
  • New Mexico Health Information Collaborative (NMHIC)
  • Louisiana HIE
  • Medical University of South Carolina (MUSC)
  • National Renal Administrators Association (NRAA)
  • North Carolina HIE
  • Pensacola
  • Quality Health Network
  • Redwood MedNet
  • Health Information Partnership for Tennessee (HIP-TN)
Further, there are Regional Extension Centers (RECs) that are using the same technology. Some of these have been connecting while they were waiting for approval into the NwHIN-Exchange. Look for signs of this at the HIMSS conference next week. You will be surprised at just how far the maturity has come.

I am so glad that ONC has finally recognized the NwHIN-Exchange. I hold out hope that in the next hours or days we will receive Meaningful Use stage 2 rules that recognize this as well. I like Direct, but I only like it for its original scope of replacing a FAX machine for the very small doctor's office. A mature system that protects integrity, privacy, and security is far better; and this is what we built into the NwHIN Exchange: an extensible system that takes care of 80% today, with room for expansion.

Thursday, February 16, 2012

A Bad Random Number Generator will produce Bad Security

The NY-Times ran an article this week that has caused much stir in crypto circles and all over the internet. The short conclusion is that a bad random number generator will produce numbers that are not random. This has been known for decades; creating good random numbers is extremely difficult, yet critical for good security.

What bothers me most is that the 'researchers' chose to title their paper in a very political way. The problem they found is in poor implementations of generating random numbers, yet their title concludes that one cryptographic methodology is better than another -- yet both need good random numbers. Yes, RSA exposes the bad random number longer. Yes, RSA is hit more. But RSA is not the problem; key creation and management is the problem.

It is very important to get randomness as good as you possibly can, and there is no way for a general-purpose computer to produce perfect randomness on its own. There are plenty of ways to get randomness through add-on hardware, or indeed the trick of using a camera focused on a lava lamp.
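To make the failure mode concrete, here is a minimal Python sketch contrasting a deterministic pseudo-random generator with the operating system's cryptographic entropy source. The modules are standard-library Python; the example itself is mine, not from the article.

```python
import random
import secrets

# A seeded pseudo-random generator is fully deterministic: two
# systems that boot into the same state produce identical "random"
# key material -- exactly the failure mode described above.
a = random.Random(1234)
b = random.Random(1234)
assert a.getrandbits(128) == b.getrandbits(128)  # identical output!

# A cryptographic source (backed by the OS entropy pool) should
# never be predictable or repeatable this way.
k1 = secrets.token_bytes(16)
k2 = secrets.token_bytes(16)
assert k1 != k2

print("never derive key material from a guessable seed")
```

The lesson generalizes to any platform: key generation must draw from a source that an attacker cannot reconstruct, which embedded devices generating keys at first boot often fail to provide.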

The problem with bad random numbers starts when a Digital Certificate is created. Indeed when you make a request for a certificate there are two paths to take.

  1. You can generate the keys yourself, and thus just expose the Public key to the CA for signing. This has the drawback that the randomness is only as good as your system can produce. 
  2. You can request that the CA/RA create the keys. A good quality CA/RA 'service' will have good quality randomness (A bad CA/RA isn't worth dealing with). The disadvantage is that the CA/RA has your private key, but they can produce new certs totally without your knowledge too, so there is plenty of risk if you don't/can't trust your CA/RA.

Does this make any past certs invalid? I would say NO. It is still sufficiently difficult to re-create the key-pairs through the method used in the paper. Yes, if the attacker is motivated, they can pull it off. The risk is that your system is one of the 1% that are bad.
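In essence, the researchers computed greatest common divisors across pairs of public RSA moduli: two keys that happen to share a prime factor (because their generators produced the same "random" prime) are both instantly factored. A toy sketch with deliberately tiny primes:

```python
from math import gcd

# Toy illustration of the shared-prime weakness, using deliberately
# tiny primes. Real RSA moduli are hundreds of digits long, but the
# arithmetic of the attack is identical.
p = 10007          # prime accidentally produced by two poor RNGs
q1, q2 = 10009, 10037

n1 = p * q1        # first victim's public modulus
n2 = p * q2        # second victim's public modulus

# Neither modulus is factorable on its own, but together the
# shared prime falls right out of a single gcd computation.
shared = gcd(n1, n2)
assert shared == p
print(n1 // shared == q1)   # the co-factor is recovered too: True
```

This is why the fault lies with key generation, not with RSA itself: with good randomness, the probability of two moduli sharing a prime is negligible, and the gcd reveals nothing.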

External References:

Feb 17th:
Corrected a mistake in path 1 of certificate request. When you generate the keys and send a certificate request, you are NOT exposing the private key, just the public one. This is why it is often considered the better path; but as the paper suggests, this is only better if you have good randomness.

Also notable is that new research seems to show that the bad keys found may be coming from embedded IT devices like routers generating keys for VPN capability. (See "Mind your Ps and Qs" in the External References.)

Wednesday, February 1, 2012

Universal Health ID -- Enable Privacy

I enjoyed reading the Wall Street Journal article “Should Every Patient Have a Unique ID Number for All Medical Records?”, at least until I got to the section by Deborah Peel. I respect Deborah as an advocate for Privacy, but her argument against Universal Health ID is a complete non-sequitur. Deborah says “But a universal health ID system would empower government and corporations to exploit the single biggest flaw in health-care technology today: Patients can't control who sees, uses and sells their sensitive health data.”

I added the bold on the words “empower government and corporations to exploit” as this is the part that is totally FALSE. There is nothing in having a universal ID that ‘empowers’ anyone. In fact one of the struggles that I am faced with in writing Privacy standards is that there is not a solid patient identifier that I can apply to Privacy Directives and Privacy Policy. This concept that having a universal ID empowers exploitation is totally wrong. What is empowering the exploitation today is that there is no way to determine what policies apply to the data. Therefore the default policy could just possibly be ‘exploit away’.

Without a solid link between the policy, patient, and data, there is no control. I want to enable the patient to control their data, and for that I need to know who the patient is. The notion that healthcare organizations would never keep your data, always transferring it to a PHR, is simply not going to happen in the USA due to many rules including medical licensing, public health reporting, disclosure, and malpractice. We need to get over this failed attempt at change. This doesn't mean the PHR doesn't have its place; I believe it holds a strong role as a peer on an HIE. I just see controlling the patient's data as something that needs to be addressed Universally. For that we need strong identifiers, strong policies, and strong data management.

I have written on Patient Identity Matching; this is the process being used today. It is an error-prone process, and worse, it requires that everyone share the patient demographics in the most exacting detail they possibly can, and that centrally there is a database of all of the shared demographics. This is MORE of a privacy violation than if the central core only needed to hold Patient ID values, where a Patient ID value is an opaque string of numbers uniquely assigned to that patient by an assigning authority (binding both the identifier and the identifier of the assigning authority results in a globally unique value).

The first section of the Wall Street Journal article, written by Michael Collins, hinted at this. I won’t bother hinting. The ramification of NOT having a universal ID is that we are FORCED to expose high-fidelity patient demographics. Even if we are using a PHR, even if we are using Direct Push, we MUST fully describe the patient in order to make sure we are dealing with the right patient.

Note that Patient Safety will eventually come into the picture, as ultimately, before the patient is treated, they need to be highly identified; using their Universal ID alone at treatment time is simply not “Safe”. For one, we know that people share insurance ID values so that their treatment is paid for.

We do NOT need a single Universal ID, and especially not a single assigning authority. All we need to do is determine a set of assigning authorities that are considered ‘good enough’. By ‘good enough’ I mean that the assigning authority has processes in place to positively identify and prove that the human they are assigning an identity to is really that specific human. We know of some of these ‘assigning authorities’ already: Passport and Driver’s License. Yes, these are non-healthcare identifiers; but if you have one then you should be able to use it. Many states are starting up mandatory Voter identity systems; these are likely to be ‘good enough’ too. More likely is to simply use the identifier assigned by your GP or your insurance. The fact is we don’t need a pre-determined list of assigning authorities; each facility can determine what is ‘good enough’ for them, though it would be nice if there were a starter set already proofed.

How are these used? Simply, they are entered into the Patient Identity Matching as a ‘high assurance’ identity with the assigning authority value. Thus they can be matched directly, bit-for-bit. 
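This binding of (assigning authority, identifier) into one globally unique value is the same pattern HL7 uses for patient identifiers. A minimal sketch, with hypothetical authority identifiers invented for the example:

```python
from typing import NamedTuple

# Minimal sketch of a qualified patient identifier: the value is
# only unique in combination with its assigning authority. The
# authority identifiers below are hypothetical examples.
class PatientId(NamedTuple):
    assigning_authority: str   # e.g., an OID or domain name
    value: str                 # opaque string issued by that authority

a = PatientId("example.dmv.state.us", "D123-4567")
b = PatientId("example.insurer.com", "D123-4567")
c = PatientId("example.dmv.state.us", "D123-4567")

# The same value from different authorities does NOT match...
print(a == b)   # False
# ...but the same (authority, value) pair matches bit-for-bit.
print(a == c)   # True
```

An e-mail address works the same way: mailbox plus domain, each part only unique within the scope of the other.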

Note that any system MUST recognize that any ID value can be revoked or replaced. Thus there is a need to keep old ID values in a cross-reference. This is another reason there is no 'single' health ID; there will likely be multiple over time, even if things are always wonderful for the patient.

Once this is done, we end up with a really cool thing: the patient can choose their own Voluntary Patient ID, likely their PHR address. This is enabled by recognizing the use of IDs as a binding between the unique value assigned and the identity of the assigning authority. You see this daily when you use an e-mail address: globally unique, because the first part is your identity and the second part is the identity of the assigning authority. In this case the assigning authority is likely not highly trusted, but if the patient trusts them, then they are likely trustworthy enough.

Patient Privacy is enabled when we have strongly assured Identifiers. We don't even need to invent a new system. We just need to use the identifiers that we have already. It would not hurt to have a new system of trustable opaque identifiers that support federation.