Friday, September 23, 2016

Mobile Health Cloud vs Privacy Regulations

There is some strong discussion going on at HL7 around privacy concerns, especially now that HL7 FHIR has made it easy to write applications.  The discussion started with an article "Warning mHealth security fears are opening doors to app and device innovation" summarizing a study done by Ketchum.  There is concern that applications are being written by people who might not be as mature in their knowledge of how important Privacy is in healthcare.
  • There are concerns that new regulations will stifle innovation. I disagree...
  • There are recommendations that broader healthcare regulations are needed. I disagree...
  • There are concerns that identifiers for patients will be bad for Privacy. I disagree...
  • Some indicate that application developers don't care about privacy until a breach gets them into trouble. I disagree...
Let me explain my disagreement... I will also say that I agree with these concerns, just not when stated in such broad terms.

This problem of mobile applications and Privacy is not unique to Healthcare. Healthcare is the scope of HL7, so it is understandable that the discussion there is focused on it. I point this out because, from a Privacy and Security perspective, we are far better off solving the problem together with all domains than trying to solve it uniquely for healthcare. Healthcare does have some unique issues, like the fact that the data can't be revoked or recalled.

The issue is somewhat unique to the USA because of its extremely fragmented Privacy regulations: we have HIPAA, GINA, 42 CFR Part 2, and many state augmentations. This patchwork of privacy regulations makes it very hard to understand the requirements; only very large organizations have the legal resources to untangle it all into one concept.

Privacy regulations are not important for instructing application writers on how to do the right thing.


Many application developers want to do the right thing, so they look to Privacy-by-Design and other Privacy Principles. These application developers design Privacy into their application, and thus Privacy does not get in the way.

Privacy regulations are important to deal with the application developers that don't try to honor Privacy, or those that actively thwart Privacy. Regulations are needed so that bad behavior can be detected and prosecuted. Don't focus on Regulations to drive the right thing; look to them to prevent the wrong thing. In a perfect world there is no reason for regulations. A perfect world is one where everyone wants to do the right thing for their peers, and has the full resources to figure out what that right thing is. We don't have a perfect world... yet.

Mobile applications and the cloud are not limited by physical borders, so they really need to look at the whole world. The problem that we have in the USA is the same problem at a global scale: there is a huge patchwork of privacy regulations globally. The solution is the same: put Privacy first. Use Privacy-by-Design and other Privacy Principles. Make your application the best Privacy-supporting application, and it will work everywhere (everywhere that governments themselves don't thwart privacy principles).

Build Privacy in from the beginning and it is not hard to do, nor will it take away from a good user experience. Hack it on later and it is surely going to be problematic. Apple is a good example of building Privacy in by design, and they have few (not zero) issues. Whereas Facebook is a good example of hacking privacy on later, although they pushed through the hard part and are much better now.

The CBCC workgroup in HL7 is trying to do its part: it is creating a Privacy Handbook that all HL7 workgroups can use when they create new standards, to assure that any Privacy Considerations are handled either in the standard being created or explained to the reader of that standard. This same thing is done by W3C, IETF, and OASIS; so we are solving the problem together with those domains.

If you can't protect the data, then don't collect the data.

Other Privacy topics are covered in these articles.

Thursday, September 8, 2016

HL7 ballot:Guidance on Standards Privacy Impact Assessment

The CBCC workgroup has published a 'handbook' for comment in the current HL7 ballot. This handbook is to be used by the workgroups within HL7 for the purpose of producing HL7 standards that have 'considered' privacy. The expectation is that when a standard has considered privacy, it will be easier to assure privacy when it is implemented.

Fortunately this is a first draft, and a draft for comment... so one hopes that major changes can be done.

I have voted negative, with three dozen comments, mostly negative. The problem this handbook has is that it asks an HL7 workgroup, while they are writing an interoperability standard, to do a Privacy Impact Assessment using Privacy by Design. These are great tools, but they are tools focused on an operational environment. Trying to apply them to the design of an HL7 interoperability standard is impossible, or at best too difficult.

This should have been obvious to the authors of this HL7 SPIA, given that the conclusion of each of the 10 steps is to write the same boilerplate text, to follow regulations, into the target specification. That alone should have made it very clear that they were using the wrong tool for the job.

I recommended from the start of this project, and my negative comments reflect this, that HL7 follow the lead of IETF and W3C. They have an approach that supports PIA and PbD, but it is cast into actions that an interoperability standards-developing workgroup can properly execute. They use terminology that is understandable, or well defined. They have reasonable steps, and reasonable activities.

HL7 is a standards organization; we expect the standards we produce to be used. We expect that the healthcare domain will not ignore HL7 and invent its own solution. Thus, as a standards organization, we should look to other standards organizations that have already created standards that are applicable, and USE THOSE STANDARDS. Why are we re-inventing what IETF and W3C have already produced? I think it is fully appropriate that we cast their text into terms that the HL7 community uses; however, even that gap is narrowing with FHIR.


Tuesday, September 6, 2016

Looking for career opportunity

As some of you know, I am currently exploring new career opportunities. Who better to reach out to than those who understand and are interested in what I do through following my blog: topics such as Privacy Consent, Access Control, Audit Control, Accounting of Disclosures, Identity, Authorization, Authentication, Encryption, Digital Signatures, Transport/Media Security, De-Identification, Pseudonymization, and Anonymization. In the spirit of good networking, I'd like to share my thoughts and objectives for my next adventure. Any thoughts, feedback, suggestions, or contacts would be greatly appreciated.

I seek to be considered for an Interop Architect, Interop Program Manager, Standards Developer, Privacy Architect, or other similar leadership position that allows me to continue to engage with International Standards development while directing one or more teams in the implementation of those standards. My philosophy is that Interoperability Standards are not a destination; they are a catalyst that enables something far greater to happen. Privacy is not an encumbrance, but an enabler of something far greater.

I have over 30 years of experience with IT communications, including 18 years of expertise in Healthcare Interoperability Standards and the application of Privacy. I have worked closely with product development teams working on small medical devices, big medical devices, health information systems, and cloud workflows combining all.

I am especially excited about the latest standard from HL7: FHIR. The FHIR standard leverages modern platforms and interaction models. It models healthcare data using XML or JSON, and its interaction model uses HTTP REST.
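As a minimal sketch of what that interaction model looks like in practice (the server URL is hypothetical, and the Python requests library is assumed; not any particular product):

    import requests

    # Hypothetical FHIR server base URL; any FHIR test server works the same way.
    FHIR_BASE = "https://example.org/fhir"

    # RESTful read of a single Patient resource, asking for the JSON representation.
    response = requests.get(
        f"{FHIR_BASE}/Patient/12345",
        headers={"Accept": "application/fhir+json"},
    )
    response.raise_for_status()

    patient = response.json()
    print(patient["resourceType"])  # prints "Patient"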

I currently hold a co-chair position in the HL7 Security workgroup, as well as a leadership position in HL7 FHIR Management Governance. I am recognized as a leader on the topics of Privacy, Security, and Interoperability in DICOM, IHE, and HL7. I wish to continue my engagement with the HL7, IHE, and DICOM standards organizations. Interoperability standards allow for the best re-use of technical implementations. These standards set the basis upon which we will add value.

I have significant experience interacting with government bodies to help them with the evaluation of Interoperability Standards, and the writing of regulations to improve healthcare. I was a member of the HIT Standards Privacy and Security workgroup, and of Direct, HITSP, and CCHIT before that. I have influenced USA regulations such as HIPAA and Meaningful Use, as well as regional regulations globally. I am a member of the Wisconsin HIE technical advisory committee, and provide technical advice to the USA national eHealth Exchange. I have advised HIE implementations in Saudi Arabia, Italy, France, the EU, etc.

I am a true believer that Privacy + Interoperability are not just equal to the sum of the parts; they enable something greater than could ever happen without them. I openly and eagerly advise and encourage through 7 years of blogging.

See my Resume/CV on LinkedIn https://www.linkedin.com/in/johnmoehrke

Comments, Suggestions, and Recommendations are welcome. I don't expect my readers to have job opportunities sitting there waiting. However, I do expect that you might know someone who knows something that is happening...

PS. It appears I am going to miss the September HL7 meeting in Baltimore. This is my second miss in a row, due to not having an employer. This is sad for me, as I look forward to being able to interact with my peers face-to-face.

PPS. I am not a "Security Architect". I love the security architects, they do a hugely important service for Privacy. I just don't find the kind of focus on defense to be fun. I am far more interested in enabling the right use of data (Privacy), than trying to stop the mass of maleficence.

PPPS. Happy birthday to my blog... now 7 years old.

Monday, August 29, 2016

Blockchain and Smart-Contracts applied to Evidence Notebook


There is a need where an individual or team needs to record chronological facts privately, and in the future make these facts public in a way that the public can prove the integrity and chronology.  Where the chronological facts need to be known to within some timeframe, typically within a day. Where the sequence of the facts needs to be provable. Where a missing recorded fact can be detected. Where an inserted fact can be detected. Where all facts can be verified as being whole and unchanged from the date recorded. Where all facts are attributable to an individual or team of authors.

Description


These proofs are used to resolve disputes and to prevent fraud, in areas like intellectual property management, clinical research, or other places where knowing who-and-when in a retrospective way is important. Aka: Lab Notebook, Lab Journal, Lab Book, Patent Notebook. Here is an image from the Laboratory Notebook of Alexander Graham Bell, 1876.


Historically, tamper-evident notebooks provided assurance of data provenance with clear chronology. Sewn bindings and numbered pages were the foundation, which the user annotated with name and date inscriptions in indelible ink. While not infallible, the notebooks were good enough for many important evidentiary functions.

Blockchain technology can bring this historical practice into the digital age. In particular, blockchain can be used to allow for work to be conducted in private yet be revealed, either by choice or circumstance, at a future date.

There are three variations on the use case:

  1. Bob is doing research that may eventually be presented publicly. When it is presented publicly, there is a need to have historic evidence of all the steps and data used. Today this is done with a tamper-evident notebook. The authors of these notebooks are also careful to include date/time as they progressively record their work. In this way an inspection of the notebook can determine that it is whole and not modified, and thus establish trust in the contents, the when, and the by-whom.

  2. Prior to 2013, the US Patent and Trademark Office (USPTO) used First-To-Invent to determine priority. While the tamper-evident notebook was essential in that model, it is still valuable supporting evidence even after the switch to First-To-File. In particular, intellectual property disputes benefit from tamper-evident records.

  3. Publicly funded research programs (e.g., NIH, NSF, DARPA) increasingly mandate the release of underlying data at a future date. There is also a trend on the part of regulatory bodies toward full data access, especially in light of concerns over negative results from clinical trials not being reported.

Narrative

The following are the various steps in the overall process.
  • As entries are added to an Evidence Notebook:
    • The evidence is recorded in a private notebook, and an Author Signature is submitted to a purpose-specific blockchain.
    • The Author may choose to also archive the evidence onto the blockchain.
    • Members of the community, as part of their support of that community, will counter-sign these Author Signature blocks.
  • At some time in the future, when the Evidence Notebook needs to be disclosed, the Author will declare their identity to the community.
  • In support of a disclosure, any member of the community with access to the Evidence Notebook may validate the notebook.

Use-Case Keeping Records

Bob, at some periodic point or based on some procedural point, submits the new Evidence Notebook pages. This is done using a Digital Signature across the new evidence pages, creating an Author Signature. This Author Signature is then placed onto the Evidence Notebook Blockchain, signed by an identity under Bob's control. The Author Signature does not expose the content of the evidence notebook, but it can be used by someone, like Edna, who has access to the Evidence Notebook to prove that the pages submitted have not changed.
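Here is a minimal sketch of what creating an Author Signature could look like, using the Python cryptography package; the curve, hash, and block layout are illustrative assumptions, not part of the use-case definition:

    import hashlib

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Bob's blockchain identity is assumed to be an EC key pair under his control.
    bob_key = ec.generate_private_key(ec.SECP256K1())

    def author_signature(new_pages: bytes) -> dict:
        """Sign the new notebook pages; only the digest and signature are
        submitted to the blockchain, never the pages themselves."""
        digest = hashlib.sha256(new_pages).hexdigest()
        signature = bob_key.sign(new_pages, ec.ECDSA(hashes.SHA256()))
        return {
            "pages_digest": digest,        # proves which pages are covered
            "signature": signature.hex(),  # proves Bob's identity signed them
        }

    # 'block' is what would be submitted to the Evidence Notebook Blockchain.
    block = author_signature(b"2016-08-29: trial 14 showed ...")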

  • ? Is there a need to define the Author Signature other than to say it is an XML-Signature format, with the signature from the blockchain rather than from PKI? The advantage the blockchain gives is the identities, the algorithm choice, and a public ledger.

Use-Case Escrow of Notebook

Bob can optionally put onto the blockchain the updated evidence notebook pages or any evidence (e.g. data) in encrypted form, with a smart-contract holding the key in escrow until one or more terms come true to release the content. The smart-contract can assure that the keys are appropriately disclosed upon trigger events such as time-period, inactivity by Bob, or other typical contract terms. This escrow also preserves the content across the blockchain redundancy.

  • ? Should the encrypted notebook pages also be counter-signed by the community? The signature would be of the encrypted blob, which would be proof that the encrypted blob appeared on the blockchain at that time.

There is no way to confirm that Bob has placed complete evidence into this encrypted evidence package without also having access to the evidence. Thus there is still the risk that Bob has done an incomplete job of preserving evidence.
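A minimal sketch of the escrow packaging, assuming Fernet symmetric encryption from the Python cryptography package; the smart-contract terms and key-release mechanics are out of scope here:

    from cryptography.fernet import Fernet

    # The symmetric key is what the smart-contract would hold in escrow.
    escrow_key = Fernet.generate_key()
    fernet = Fernet(escrow_key)

    pages = b"2016-08-29: trial 14 showed ..."
    encrypted_blob = fernet.encrypt(pages)  # this blob can safely go on-chain

    # Upon a trigger event the contract releases escrow_key, and anyone
    # holding the blob can then recover the evidence:
    assert fernet.decrypt(encrypted_blob) == pages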

Support Use-Case Counter-Signature

Peers from the community will counter-sign these Author Signatures. This blockchain signature by peers simply indicates that the Author Signature block was observed on the Evidence Notebook Blockchain at the stated time. Through multiple counter-signatures by peers, trust in the Author Signature's veracity is confirmed.

Automated timestamp peers could also be used, doing nothing but applying a verifiable timestamp signature across any new Author Signatures. These are indistinguishable from Peers, except that Peer identities would also be submitting their own Author Signatures, expecting peer counter-signatures.

Peers are compelled to counter-sign as an act of community. Through these peer identities counter-signing Author Signatures, they gain more of their own peers willing to counter-sign any Author Signatures that identity might post ("you scratch my back, I'll scratch yours"). Thus, a new identity on the blockchain that has not yet counter-signed others' Author Signatures would not find peers willing to sign that new identity's Author Signatures.
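A sketch of a peer counter-signature, continuing the earlier illustrative assumptions (EC keys, and a JSON serialization of the observed block plus a timestamp):

    import json
    import time

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Paul's peer identity, again assumed to be an EC key pair.
    paul_key = ec.generate_private_key(ec.SECP256K1())

    def counter_sign(author_sig_block: dict) -> dict:
        """Attest that the Author Signature block was observed on the
        chain at (roughly) this time, by signing block-plus-timestamp."""
        observed_at = int(time.time())
        payload = json.dumps(
            {"block": author_sig_block, "observed_at": observed_at},
            sort_keys=True,
        ).encode()
        signature = paul_key.sign(payload, ec.ECDSA(hashes.SHA256()))
        return {"observed_at": observed_at, "signature": signature.hex()}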

Use-Case Public Knowledge

The system to this point does not require identities to be known. Neither Bob nor the Peer identities need be publicly known. They are simply identities on the Evidence Notebook Blockchain. An identity owner is free to explicitly make their identity known.

Bob needs to make public claims backed by the Evidence Notebook, proven through Author Signatures made by a specific blockchain identity or identities. To do that, Bob needs to make public proof that he is the holder of the private key associated with one or more identities, thus binding Bob's identity to all historic uses of those identities.

Once Bob makes identities public knowledge, others can monitor new Author Signatures created by those identities. This may be seen as exposing activity, so identities that have been made public might no longer be used for new Author Signatures. Conversely, the public knowledge of an identity may be seen as beneficial, so the identity may be made public early.

Use-Case Verifying Records

Edna needs to confirm the content of an Evidence Notebook. Edna has been given access to the Evidence Notebook content. Edna knows the Evidence Notebook Blockchain identity that is claiming to have made Author Signatures corroborating the specific pages from the Evidence Notebook. The Evidence Notebook may be in any electronic form, as long as the Digital Signature process is repeatable. This is often done using the XML-Signature mechanism.

Edna verifies the Author Signature of each submission (page). Edna verifies counter-signatures to gain assurance that the Author Signature has not been tampered with, and that it occurred during the time indicated.
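Continuing the same illustrative sketch, Edna's two checks (digest match and signature verification) could look like this; author_public_key is assumed to be resolvable from the claimed blockchain identity:

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def verify_pages(pages: bytes, sig_block: dict, author_public_key) -> bool:
        """Edna's check: the pages she was given must hash to the on-chain
        digest, and the signature must verify against the claimed identity."""
        if hashlib.sha256(pages).hexdigest() != sig_block["pages_digest"]:
            return False  # pages differ from what was originally submitted
        try:
            author_public_key.verify(
                bytes.fromhex(sig_block["signature"]),
                pages,
                ec.ECDSA(hashes.SHA256()),
            )
        except InvalidSignature:
            return False  # signature does not match the claimed identity
        return True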

Edna may choose to discount specific identities that have been determined to be fraudulent, or where control of that identity's private key has been compromised. Edna may choose to discount identities that have not yet made themselves public, holding public identities in higher regard. Note that the movement of an identity from anonymous to public has value to the community as a whole.

Actors

(brought in whole list from here. Figured we should re-use actors if they fit.)

Actor and role in the use case:
  • #Bob: The person or entity that submits Author Signatures. They are assumed to be an investigator or worker on a research team.
  • #Edna: An authenticated and authorized individual who has been granted access to the Evidence Notebook. This may be a staff researcher for the Study Sponsor doing cross-study correlations, or an external researcher with a different study question that can be answered with previously collected data.
  • #Paul: A peer on the blockchain. The identity may be known or not known.
  • #Mal: A generic bad actor.
  • Research Sponsor: The organization that receives research data. These individuals or systems need access to the evidence. They may receive this evidence directly, or through the Escrow Evidence. For the purpose of diagrams and data flows, any member of the study team will be represented as "Dan".
  • Research Team: The individuals and systems who are performing some research or other activity for which an Evidence Notebook is necessary. Bob is a member of the research team. For the purpose of diagrams and data flows, any member of the research team will be represented as "Bob".
  • Peers: The individuals and systems who counter-sign Author Signatures to help provide veracity. It is expected that peers will not be part of the same research team as Bob.

Prerequisites / Assumptions

  • Bob needs to keep the research confidential until some future time.
  • The format of the notebook need not be constrained, as long as the digital signature can be validated once the notebook is made public.
    • Presume use of the XML-Signature schema can mediate this.
    • If Evidence data is disclosed, it must be properly handled or de-identified.
  • There is no need to publish the content of the notebook on the blockchain.
    • There is an option to put an encrypted notebook on the blockchain, and use smart-contracts to unlock it as appropriate.
  • Bob may have many notebooks, or may have many research projects interleaved within one notebook. This is similar to paper notebooks today.
  • Bob may need to hide his current activities, meaning new activity can't be associated with Bob.

Use Case Diagrams


Use Case steps

  1. New Author Signature
    1. Bob updates his evidence notebook
    2. Bob submits an Author Signature block to the blockchain
    3. Bob optionally submits Evidence blobs to the blockchain
    4. Paul notices a new Author Signature block
    5. Paul counter-signs the Author Signature block
  2. Evidence Notebook validation
    1. Edna is asked to confirm an Evidence Notebook
    2. Edna is given access to the Evidence Notebook (may not be public disclosure)
    3. Edna validates signatures from the blockchain
    4. Edna validates counter-signatures from the blockchain
    5. Edna extracts timestamps from set of signatures
    6. Edna may validate Public Signatures as necessary
  3. Evidence disclosed
    1. Smart-Contract triggers
    2. Smart-Contract may include notification mechanisms to Dan
    3. Dan receives the Evidence and decryption keys, given the trigger on the Smart-Contract

Sequence Diagrams

(drafting, not yet done)

End State

The use case ends when Bob stops submitting Author Signatures under a given identity. There is no expectation that identities must remain publicly unknown, or that they can't be used once publicly known.

Success

  • Author Signatures are validated
  • Modified Author Signatures are detected as not valid
  • Participation sufficient to achieve (n) counter-signatures
  • Funding by organizations relying on output (research, clinical trials, etc.)

Failure

  • Participant collusion to revise history
  • Is an insufficient number of peers, and therefore an insufficient number of prompt counter-signatures, a distinct failure mode?


Champion / Stakeholder

John Moehrke (self)
Scott Bolte (Niss Consulting)

Related Material


Common Accord: CommonAccord is an initiative to create global codes of legal transacting by codifying and automating legal documents, including contracts, permits, organizational documents, and consents. They anticipate that there will be codes for each jurisdiction, in each language, and, for international dealings and coordination, at least one "global" code. (Center for Collaborative Law)

IP Handbook - "Inventors and Inventions" - Chapter 8: "How to Start - and Keep - a Laboratory Notebook: Policy and Practical Guidelines", http://www.iphandbook.org/handbook/ch08/p02/

MIT - Instructions for Using Your Laboratory Notebook http://web.mit.edu/me-ugoffice/communication/labnotebooks.pdf May, 2007

NIH - “Keeping a Lab Notebook” - Presentation by Philip Ryan, https://www.training.nih.gov/assets/Lab_Notebook_508_(new).pdf

FDA - Pharmaceutical Quality Control Labs - http://www.fda.gov/ICECI/Inspections/InspectionGuides/ucm074918.htm

Cornell - LabArchives - an electronic lab notebook - http://collabhelp.cit.cornell.edu/lab-archives/

Howard Kanare - Writing the Laboratory Notebook, American Chemical Society Publications, 1985, ISBN 978-0841209336

Astroblocks - Lab Journal on Blockchain, experimental use of bitcoin chain, April, 2015, http://www.newsbtc.com/2015/04/11/astroblocks-lab-journal-on-blockchain/

Saturday, August 27, 2016

Privacy Constraints in Controlling Big-Data Feeding Frenzy

This article covers the constraints often placed on an approved use of healthcare data: the conditions, restrictions, obligations, or handling caveats.

When a Patient allows use of their data, there are almost always restrictions. Some restrictions are supported in access control rules, which I have already covered in Vectors through Consent to Control Big-Data Feeding Frenzy; I am not going to re-describe "Vectors" here. The Vectors are used in rules to determine if an access is allowed or denied.

Some of those Vectors are similar to constraints, such as the discussion about "Treatment", "Payment", or "Operations", which I covered in Consent Basis in Controlling Big-Data Feeding Frenzy. An important concept from that specific example is "Purpose Of Use". This is both a "Vector" and a "Constraint". That is, a rule can be based upon a user's request, where the request indicates that the user asserts they will only use the data for a specific Purpose Of Use (e.g., "Treatment"). In this case the "Purpose Of Use" is satisfied at the "Vector" stage.

Purpose Of Use can also be a "Constraint", in that data might get released with a constraint attached indicating the set of Purposes Of Use that are allowable. This might be done when the user indicates too many Purposes Of Use, where the Privacy Consent only allows a subset; or when the user's request context didn't clearly declare a Purpose Of Use.
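A minimal sketch of that logic in Python (the purpose labels are simply the ones used in this post; a real deployment would use coded vocabularies such as the HL7 PurposeOfUse value set):

    # Purposes the Privacy Consent allows (hypothetical example).
    CONSENTED = {"Treatment", "EmergencyTreatment", "Payment"}

    def decide(requested_purposes: set) -> dict:
        """Permit only the overlap, and attach it as a Constraint that
        travels with the data; deny if there is no overlap at all."""
        allowed = requested_purposes & CONSENTED
        if not allowed:
            return {"decision": "deny"}
        return {
            "decision": "permit",
            "constraint": {"purposeOfUse": sorted(allowed)},
        }

    print(decide({"Treatment", "Marketing"}))
    # {'decision': 'permit', 'constraint': {'purposeOfUse': ['Treatment']}}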

In some technologies a Constraint is called an Obligation; in other technologies it is just part of an Authorization Decision. What I am focused on here is a constraint that goes along with the data and will further restrict use or cause a specific action.

Some Constraints are not explicitly stated in the technology layer, but are part of the Policy that enabled the communication. Such is the case with a "Data Use" agreement. Here is where Purpose Of Use is seen again: often a communications "Data Use" agreement authorizes only specific kinds of uses. Some Health Information Exchanges have a restriction of "Treatment"; some Health Insurance Exchanges have a restriction of "Payment"; some Research networks have a restriction of "Research".

Up to now I have mostly talked about Purpose Of Use, which is relatively easy to understand and enforce. The following are more specific constraints. These too might be in the Data Use policy, might be represented in Vectors, or might be communicated with the data.

The following are some of the ideas in the space of constraints. Many of these have specific Obligation or PurposeOfUse codes.


  • Purpose Of Use
    • Treatment
    • EmergencyTreatment
    • Payment
    • Operations
    • Research
    • PublicHealth
    • Marketing
    • Donation
  • Access
    • no access beyond the given user
  • Persistence
    • do not persist -- delete after use
    • do not print
    • persist only in encrypted form
  • De-Identification
    • declassify
    • mask
    • redact
    • minor
  • Auditing
    • audit trail
    • notification of subject on use
  • Future Consent
    • re-use requires new consent
    • restrict to specific users

Conclusion

It is very unusual for a Privacy Consent to allow access without Constraints. Most of the time the constraint is built into the Policy that enabled communications, so it doesn't need to be stated in the communication. Much of the time the constraint can be handled as part of the Access Control decision. Sometimes the constraint needs to be communicated along with the data, often referred to as an Obligation. This is only done when there are assurances that the residual constraint, the Obligation, will be enforced.

Other articles on Privacy Controls and Privacy Enforcement

Friday, August 26, 2016

Consent Basis in Controlling Big-Data Feeding frenzy

In the last article I wrote about all the Vectors through the healthcare data access control space that are commonly needed by Patient Privacy Consent Authorizations. In this article I will describe the residual policy rules and Obligations.

When a Patient says YES to authorize access to their data, they are saying it within some context. This authorization comes with metaphorical strings attached.

Overall Policy context

A Consent Policy is a multi-layered thing. Let me illuminate this by looking at the simplest and most common Privacy Consent in healthcare:

  • The Patient says YES to authorize use of their data for Treatment, Payment, and normal hospital Operations.

One might think that this is a very simple Consent: simply "YES". Others might notice that there are some restrictions to "Treatment/Payment/Operations". Both are very important attributes of the consent, and would be seen clearly in the consent.

The Consent that would be on file will likely just say these simple truths. You have all seen a Consent form; they are not very all-encompassing.

What is implied is:
  • This consent is for only the one organization, likely implied by the author of the consent.
  • This consent has a start date of today.
  • This consent names the patient.
  • This consent names the purposes-of-use of Treatment, Payment, and normal hospital Operations.
What is unclear is:
  • This consent doesn't appear to have an end date. 
    • So we need to look into the Organization's policies to see what their data retention policy is. Do they retain beyond receiving payment for services? Do they retain until death? Can I ask that they discard?
    • What control is there if the Organization is merged with another organization? Or goes out of business?
  • This consent relies on an agreed definition of "Treatment"
    • Does treatment mean all at the Organization can access the data regardless of treatment relationship?
    • Is there a formal treatment relationship system at this Organization?
    • Who is allowed to declare they are treating?
    • What actions are considered treatment, vs payment, vs operations?
    • One can imagine Treatment is restricted to licensed clinicians; but who is checking that?
    • Are any third parties used for any Treatment actions?
    • Are dietitians involved as part of Treatment, or Operations?
  • This consent relies on agreement of definition of "Payment"
    • Can I pay with cash and thus not expose this episode to any insurance?
    • Who are the people involved in Payment?
    • Are these accesses part of the access report?
    • Are third parties used for any of these Payment activities?
  • This consent relies on agreement of definition of "Operations"
    • What is operations?
    • Who is authorized to do operations?
    • Who authorizes those that are authorized?
    • Are these operations actions also included in an audit?
    • Does this include government reporting?
    • Is there any way I can control what operations is?
    • Are third parties used for any of these Operations?
  • This consent doesn't say anything about things that are not mentioned. Does this mean that these other things are forbidden?
    • Often there is a statement hidden somewhere that indicates there are times when Marketing may happen. Often this is considered part of normal Operations.
    • Often the organization is under government mandate to participate in quality reporting, immunization reporting, drug-abuse reporting, physical-abuse reporting, etc.
    • Often the organization is required to assist with law enforcement. Does this require a court order? 
    • Often the organization has a clinical-research function. Are the data used in clinical research? Are the data de-identified? If de-identified, what assurances that the de-identification is sufficient? What remedy is available if the de-identification is not sufficient?
    • Are third parties used for these unsaid things?
Further away are questions like:
  • How do I get an accounting of access?
  • How do I dispute that someone got access that should not have?
  • How do I request a correction?
  • If I terminate the Consent, then what is still allowed to be done with my data?
  • What remedy is available?
Within HIPAA there is a requirement that the Notice of Privacy Practices be posted. Although HIPAA is a very minimalist regulation and specific to the USA, similar practices are found elsewhere in the world. Some of the above questions might be answered by that document. However, I am sure some of the above is not stated in that document.

An important point is that the details needed are not found in any Regulation; they are specific to the Organization. The Organization must look at regulations and their goals, and come up with their specific Policy. This concept of Layers of Policy was first introduced in my Healthcare Privacy & Security Bloginar, based on the IHE presentation.

This preparation is also the first step in my discussion on the overall Consent Process. Shown in this infographic.

So a Consent record will indicate who the Patient is, what the start date is, what organization it is with, etc. The Consent record needs to also be very clear about what rules apply at that organization. This is what I am referring to as the Base Policy: the basis of the Consent, that which this specific Patient's Consent is built upon.

That Base Policy is defined to be a set of definitions and rules intended to meet some Goals and Regulations, shown in blue in the following figure. That Base Policy informs and controls a bunch of IT systems, including a User Directory, Patient Directory, Role assignment, etc. That Base Policy fulfills a set of regulations. So the Base Policy is fulfilling the Organization's responsibility to Regulations (like HIPAA), and to the Goals of the Organization.
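As a hypothetical sketch of that layering, a Consent record carries only the patient-specific facts and points at the organization's Base Policy for all the definitions discussed above (the field names are illustrative, not any particular standard):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ConsentRecord:
        """Only the patient-specific tip of the iceberg; everything else
        is defined by the Base Policy it references."""
        patient_id: str
        organization: str
        start_date: date
        base_policy_uri: str    # where the organization's definitions and rules live
        purposes_of_use: tuple  # e.g. Treatment, Payment, Operations

    consent = ConsentRecord(
        patient_id="patient/12345",
        organization="Example Hospital",
        start_date=date(2016, 8, 26),
        base_policy_uri="https://example-hospital.example/policy/base-v3",
        purposes_of_use=("Treatment", "Payment", "Operations"),
    )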

Conclusion

The Base Policy of a Consent is just as important as the Consent. The Base Policy is not the regulations; regulations are the basis of the Base Policy. The Base Policy includes a huge number of rules and commitments that are specific to that organization. The Consent is the proverbial tip-of-the-iceberg.

Other articles on Privacy Controls and Privacy Enforcement