
Friday, January 27, 2012

HIE using IHE

An Introduction to building a Health Information Exchange using various IHE Profiles. (If you are having trouble getting to the IHE web site, here is an FTP link to the same paper.) I was one of the contributors to this white paper, so I like it. I wanted to add more and more detail, but we wanted to keep it short; at 35 pages it is as small as we could get it. The list of things one needs to think about when building an HIE is quite long. We did not re-write the good work found in the IHE Affinity Domain planning kit, which is still a fantastic resource for building your governance, code-sets, and policies, as seen from Connecticut.

The paper includes discussion of the principles that IHE has considered in its Profile development, including Governance, Document characteristics, Longitudinal issues, Metadata, Document Relationships, Document Life-cycle, Patient Identity, Discovery, Security, and Privacy. IHE supports Health Information Exchanges that use Push, Publish and Discovery, and Federated Discovery. Each of these architectures has its own section describing the profile use. Each profile leverages the same document life-cycle, packaging, and metadata model.
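
To make that shared metadata model a little more concrete, here is a minimal illustrative sketch in Python of the kind of DocumentEntry metadata the XD* profiles all reuse. The attribute names follow the XDS DocumentEntry model; every value shown is made up for illustration.

```python
# Illustrative sketch of XDS DocumentEntry metadata shared by the XD* profiles.
# Attribute names follow the XDS DocumentEntry model; all values are made up.
document_entry = {
    "uniqueId": "1.2.3.4.5.6789",               # globally unique document id (example OID)
    "patientId": "MRN12345^^^&1.2.3.4&ISO",     # patient identity in the affinity domain (example)
    "classCode": "Summary",                     # broad document class
    "typeCode": "Discharge Summary",            # more specific document type
    "formatCode": "urn:example:cda-r2",         # technical format (hypothetical code)
    "confidentialityCode": "N",                 # sensitivity/confidentiality marking
    "creationTime": "20120127",                 # when the document was created
    "healthcareFacilityTypeCode": "Hospital",   # where it was created
    "hash": "da39a3ee5e6b4b0d3255bfef95601890afd80709",  # integrity check of the document bytes
    "size": "24576",                            # document size in bytes
}

# The same metadata travels with the document whether it is pushed (XDM/XDR),
# published to a registry (XDS), or discovered across communities (XCA).
print(document_entry["patientId"])
```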

Extended discussion is included on the architectures behind the XD* profiles. For example, the XDS profile supports a centralized registry model with distributed repositories. This model is shown with a slide from an upcoming webinar on the topic.

Although XDS is the 'grand-daddy' of the XD* profiles, the other profiles are just as important given different patterns. When a Push model is desired, the XDM and XDR profiles are best suited. When there is a need to federate communities, XCA fills it.

Patient Identity is one of the topics that receives extended discussion, as are Provider Directories, Privacy, and Security. Patient Identity is complex in a Health Information Exchange, even in the simplest models.

The white paper covers Privacy and Security at a high level. My webinar covers more detail.

In all cases, the details are not included in the white paper; it relies heavily on the IHE webinars for additional overview and on the profile text for the details.

Update: For those wondering if or where these IHE profiles are used to build HIEs, see http://tinyurl.com/wwxds

As a White Paper, it is expected to be updated based on feedback. So if you have a question that is not answered, please log it as a comment (see below).

------------------------------------------------------------------------------------------------------------ 


IHE IT Infrastructure White Paper Published

The IHE IT Infrastructure Technical Committee published the following white paper on January 24, 2012:

  • Health Information Exchange: Enabling Document Sharing Using IHE Profiles

The document is available for download at http://www.ihe.net/Technical_Framework/index.cfm. Comments can be submitted at http://www.ihe.net/iti/iticomments.cfm.

Distributed Active Backup of Health Record

A lesson learned from large-scale natural disasters, Hurricane Katrina in the USA and the 2011 Tōhoku earthquake and tsunami in Japan, is that a system backup is not sufficient in healthcare. A system backup is one where the data and/or system is backed up to something like backup tape. It is hopefully stored off-site, specifically in a different region, to keep it far enough away from any large-scale disaster such as Katrina or Tōhoku. The backup might also be done directly into the cloud; that is, rather than using physical tape, the backup is streamed to cloud-based storage (I do this at home with Carbonite).

But this is not good enough in healthcare. In healthcare, the people directly affected by the large-scale natural disaster will need healthcare urgently and long term. The disaster, especially in these large-scale cases, destroyed the local healthcare infrastructure (clinics, hospitals, etc.). Thus these operational environments are not available to 'restore the backup'. It is possible to provision totally new, yet identical, systems elsewhere and then restore the backup onto those systems. But this takes time and is logistically very difficult to do.

The problem is that the directly affected community needs healthcare right away, and in a way that is sustained. This is not to focus on the emergency need, as emergency treatment works well in the absence of historical information (although it would be enhanced if that information were there), but rather to recognize that urgent care, emergent care, and sustaining care are still necessary. Some examples that are not obvious to many people: the need to put treatment plans in place that will need to be continued for months or years; picking up on past long-term treatment plans; and re-issuing prescriptions that were in place (urgent, to replace dispensed drugs that were destroyed). The threat (risk assessment) to these healthcare workflows must be considered.

The Patient-Centric Health Record needs to be available regardless of a large-scale natural disaster. One possibility is to distribute the active health information across regions far enough away from any large-scale disaster. Others might use a Health Information Exchange, or a whole datacenter replicated onto a truck. It is possible that a PHR might also be a solution, but that only works if the patient takes the initiative, so it can't really be an organization's or community's solution. Risk assessment is clearly the right approach to produce the best solution for your environment, given the likelihood and impact of disasters in your area.

In Japan, they filled the short-term need with their distributed prescription system. Fortunately for them, they did have all the prescriptions available through it. This clearly allows for re-dispensing and new prescriptions. What is truly creative is that Japan used this system to re-create the patients' likely problems. I find this wonderfully creative, yet tragic that it had to be done. Japan now has rules that require all patient records to be duplicated at more than one facility. This is done through message routing in real time.

Healthcare is about providing Care for the Health of the Patient. This is not a local-business problem, but a much wider one. The community needs are important. Please consider these risks, made so clear by these large-scale natural disasters.

Wednesday, January 25, 2012

FW: Privacy and Security Mobile Device Good Practices Project Launched

This just crossed my desk. I think that mobile devices do present a risk that is higher than most people realize. But I don't think that mobile devices are special. Risk-based approaches are the right answer.

--------------------------------------------------------------------------------------------

From: ONC Health IT [mailto:onchealthit.@service.govdelivery.com]
Sent: Wednesday, January 25, 2012 12:46 PM
To: Moehrke, John (GE Healthcare)
Subject: Privacy and Security Mobile Device Good Practices Project Launched

HealthIT.gov

Privacy and Security Mobile Device Good Practices Project Launched
ONC's Office of the Chief Privacy Officer (OCPO), working with the HHS Office for Civil Rights (OCR), recently launched a Privacy & Security Mobile Device project.

The project goal is to develop an effective and practical way to bring awareness and understanding to those in the clinical sector to help them better secure and protect health information while using mobile devices (e.g., laptops, tablets, and smartphones). Building on the existing HHS HIPAA Security Rule - Remote Use Guidance, the project is designed to identify privacy and security good practices for mobile devices. Identified good practices and use cases will be communicated in plain, practical, and easy to understand language for health care providers, professionals, and other entities.

HHS will be looking for your input. Stay tuned for a public roundtable this Spring.

For information about other HHS mHealth activities, please visit the mHealth Initiative website: http://www.hhs.gov/open/initiatives/mhealth/index.html.

Saturday, January 21, 2012

Data Segmentation Use Case Ready for Review

This just crossed my desk. Please help review these use-cases. I think we need to constrain the scope, not because the scope isn't appropriate, but because it is only with a constrained scope that we will make the next step of progress. This is a multi-step journey; taking too big a step will result in failure.

A little background: this is a USA effort driven by the government body HHS/ONC and others. The focus is to determine, for these use-cases, what is necessary and/or how to support it. The use-cases are focused on highly sensitive medical topics like HIV, drug abuse, etc. The scope is keeping such data sequestered (segmented) when there is no authorization, yet available when it is authorized. I wrote about this in more detail when the project was kicked off.

From: S&I Framework Admin [mailto:admin@siframework.org]
Subject: Data Segmentation Use Case Ready for Review
Sent: Friday, January 20, 2012 8:51 AM 
Hi Work Group Members,
As a reminder the Data Segmentation Use Case document is posted and ready for review. You can find the Data Segmentation Use case attached to the Comment Form page: http://wiki.siframework.org/Data+Segmentation+Use+Case+Comment+Form. Please take a moment to review the document and to provide detailed, actionable comments where applicable. If you prefer to email your comments you can send them to Jamie.Parker@esacinc.com. The consensus process schedule is as follows:
1/23 - Comment Process Closes - (all Comments on the Data Segmentation Use Case document must be submitted by COB 1/23)
1/25 - Final Review of Consensus Comments and Approval of the Use Case for Consensus Voting
1/26 - Consensus Voting Begins
1/30 - Consensus Voting Closes (all consensus votes must be received by COB 1/30)
2/1 - Review of the Consensus Vote

We look forward to your comments as we reach one of the Data Segmentation For Privacy Initiative milestones.
Thank you
Jamie on behalf of the Data Segmentation Support Team

Thursday, January 19, 2012

HIE/HIO Governance, Policies, and Consents


I wrote about the Connecticut HIE Policies that were out for public comment. Connecticut is now moving forward with moving real patient data for real patient treatment. This is fantastic, but what is really wonderful for the rest of us is that Connecticut is being very open and transparent. They have published their whole stack of governance, policies, and consents.

This is a really great example of the administrative work that must be done before one can really evaluate the security and privacy needs of an HIE. These policies were written using many ISO standards and the IHE Affinity Domain planning kit. Please go to the site, as they have a beautiful breakdown of the many policies that are needed. Many people don't believe me when I say that there are many layers of policy.

These are a really good example of how an HIO can look at what is out there, pull from what they understand, and do what is necessary to get done what they need. For example, on confidentialityCode, Connecticut was confused by the vocabulary offered by HL7 and thus wrote their own vocabulary. They actually pulled more from ISO 13606, but didn't use that vocabulary either. We were lucky enough to discuss this in detail this summer. HL7 has since revised its documentation and vocabulary so that we can have a vocabulary that can be understood beyond one HIE.
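
As a rough illustration of what a shared confidentialityCode vocabulary buys you, here is a small Python sketch using the core HL7 Confidentiality codes; the "local" vocabulary and its mapping to the shared codes are entirely hypothetical.

```python
# Sketch: mapping a hypothetical local sensitivity label to a shared
# HL7 Confidentiality code so it can be understood beyond one HIE.
HL7_CONFIDENTIALITY = {
    "N": "normal",           # typical clinical data
    "R": "restricted",       # heightened sensitivity, limited access
    "V": "very restricted",  # highest sensitivity
}

# Hypothetical local vocabulary used inside one HIE
LOCAL_TO_HL7 = {
    "standard": "N",
    "sensitive": "R",
    "sealed": "V",
}

def to_shared_code(local_label: str) -> str:
    """Translate a local label to the shared HL7 code (illustrative only)."""
    return LOCAL_TO_HL7[local_label]

code = to_shared_code("sensitive")
print(code, HL7_CONFIDENTIALITY[code])
```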

The establishment of policies and procedures is a key component of an effective HIE and sets the boundaries for data sharing between the health information exchange and its participating partners.

These policies are now posted on the HITE-CT website and are available for public comment. The direct link to the Policies and Procedures page is http://1.usa.gov/hitectpolicies. The policies may also be accessed by going to the DPH website at www.ct.gov/dph under featured links: “Health Information Technology Exchange of Connecticut”, then clicking on “Policies and Procedures” located on the left-hand bar menu.
I would love to see more of this. It is always very important to see how a standard is understood or misunderstood so that we can make it better. I have pointed many people toward this site, and everyone has come back to me the next day, thanked me, and asked for an introduction to the brains behind this. I know her well, so ask and I will gladly introduce you too.

For a Webinar on this 

Monday, January 16, 2012

Workflow Automation Among Multiple Care-Providing Institutions

The IHE IT Infrastructure Technical Committee has released the Cross-Enterprise Document Workflow (XDW) profile. This is a key foundational profile that will enable use-case-specific workflows to be managed across organizational boundaries. It sets forward a basic workflow 'token' by defining a workflow document, which is profiled by combining OASIS WS-HumanTask and HL7 CDA. The XDW profile does not define any workflows, but rather sets a framework that others will use to support Health Information Exchange (HIE) based workflows like Patient Transfer, Remote Imaging Diagnosis Referral, Prescription workflows, and many Home Care plans. By re-using the XDS infrastructure, XDW leverages the clinical documentation also managed there. XDW provides context and focus to that documentation.

XDW is targeted to facilitate the development of interoperable workflow management applications where workflow-specific customization is minimized. XDW does NOT replace departmental workflows, although it might leverage a departmental workflow as one of the larger steps that are managed by XDW. XDW does NOT replace centrally managed workflows such as those controlled by BPEL, but might leverage a BPEL controlled workflow as one of the larger steps that are managed by XDW. The XDW workflow may refer to an externally defined and possibly not computable workflow definition, but is designed to support evolution to completely enclosed and/or computable workflows.


In the development of the XDW Profile, much attention has been applied to making XDW deployment easy, without the need for dedicated centralized workflow infrastructure. The profile recognizes that the patient is central to their healthcare workflow, and thus uses the patient-centric Health Information Exchange provided by XDS. XDW is foundational, both supporting care-setting-specific workflows and allowing expansion through future evolution.

Health systems in developed countries are receiving much attention; it is now well accepted that interoperable IT systems and EHRs are a critical technology. Much progress has been made in standards-based interoperability in health information exchange (HIE), and early deployments are encouraging. Integrating the Healthcare Enterprise (IHE) profiles such as Cross-Enterprise Document Sharing (XDS) and Cross-Community Access (XCA) are playing a key role in large-scale projects such as the European cross-border epSOS and the US NwHIN-Exchange. The ability to share Clinical Documents (CCD/CDA), Diagnostic Imaging (DICOM), or even unstructured (PDF) content is accepted as foundational.

We now look to manage workflow among the various institutions that are coordinating a patient's care delivery. These workflows are community-wide and involve different tasks at different specialty facilities that are not part of the same organization, and they leverage the HIE infrastructure that has proven to be a good longitudinal document exchange. This is the problem addressed by the Cross-Enterprise Document Workflow (XDW) profile.

A novel approach to multi-organization workflow management
The Cross-Enterprise Document Workflow (XDW) profile was released by IHE in October 2011 for trial implementation. XDW enables participants in a multi-organizational environment to manage and track the tasks related to patient-centric workflows as they coordinate their activities:
  • It does not rely on a central controller or a central scheduler, thus it may scale from exchanges to communities to nations.
  • Decisions are made by the “edge” IT systems supporting the providers, doctors, nurses, etc. Note that an edge system can be a departmental workflow automation system.
  • It is flexible enough to support a wide range of workflows, from the simplest e-Referral to dynamic and adjustable workflows.
  • It minimizes implementation impact on IT systems that manage workflows within each organization.
  • It provides a basic framework today that can be enhanced by specific workflows and advanced as multi-organizational workflow standards emerge and mature.
XDW: a Framework for multi-organizational workflows
XDW is an interoperability framework operating in a document sharing context (e.g., based on the XDS profile) which supports the management of clinical processes. It is a workflow-generic profile which needs to be specialized through specific Workflow Definitions (IHE-specified profiles or project-specific definitions) to address specific clinical processes. As a framework, it increases the consistency across workflows and enables the easy deployment of interoperable workflow management applications where workflow-specific customization is minimized. As a result, XDW facilitates the integration of multi-organizational workflows with the variety of existing workflow management systems used within the participating organizations.

It federates the workflows managed by each institution in a peer-to-peer approach. This is quite different from the classical intra-hospital approach of supporting a workflow by exchanging messages (orders, acknowledgments, results, notifications, etc.) between predefined IT systems.

How does XDW Work?
XDW operates around the sharing of a Workflow Document for each workflow instance associated with a patient. A Workflow Document (see the simplified sketch below):
  • Tracks the current and past tasks of the workflow and the health care entities engaged
  • Tracks the status of the workflow-specific tasks, with relationships and references to input/output documents
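
To make the idea of a Workflow Document more concrete, here is a deliberately simplified Python sketch of the kind of information such a document tracks. This is not the XDW schema (the real document is XML based on OASIS WS-HumanTask with HL7 CDA header elements); every field name and value here is invented for illustration.

```python
from datetime import date

# Simplified, hypothetical model of what an XDW Workflow Document tracks.
workflow_document = {
    "workflowId": "urn:uuid:11111111-2222-3333-4444-555555555555",  # one document per workflow instance
    "patientId": "MRN12345^^^&1.2.3.4&ISO",       # the patient this workflow is about
    "workflowDefinition": "simple eReferral",     # which Workflow Definition is in play
    "tasks": [
        {
            "name": "Referral Requested",
            "status": "COMPLETED",                # task status, current and past
            "owner": "Dr. Smith, Family Practice",
            "lastModified": str(date(2012, 1, 10)),
            "inputDocuments": ["ReferralRequest-uniqueId"],
            "outputDocuments": [],
        },
        {
            "name": "Referral Accepted",
            "status": "IN_PROGRESS",
            "owner": "Cardiology Clinic",
            "lastModified": str(date(2012, 1, 16)),
            "inputDocuments": ["ReferralRequest-uniqueId"],
            "outputDocuments": [],                # a report would be referenced here later
        },
    ],
}

# Each participant reads the shared document, appends or updates its task,
# and publishes the new version back to the document sharing infrastructure.
current = [t for t in workflow_document["tasks"] if t["status"] == "IN_PROGRESS"]
print(current[0]["owner"])
```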



The execution of each instance of a workflow is driven/enforced by the participating IT systems (XDW actors), while the document sharing infrastructure provides transparent access to any authorized participating system.

The Workflow Document format and structure is specified by XDW and is generic across any specific workflow definition.

What are the workflows supported by XDW?

The XDW profile is designed to be used in conjunction with any Workflow Definition. It makes the task of defining such workflows quite easy, as it provides a clear and user-friendly framework for developing such definitions. A Workflow Definition includes:

  • A human-readable definition of a specific healthcare process

  • A set of rules and task definitions which characterize the process

  • The definition of the participants involved in the workflow and their roles

The XDW profile contains an annex that includes the informal definition of a simple e-Referral workflow, sufficient to implement and use XDW. IHE expects that clinical IHE domains will develop reusable Workflow Definitions as IHE Profiles for the most common workflows, but more specific workflows may also be defined by e-Health projects around the world. IHE anticipates that provider organizations and national bodies (e.g., HHS/ONC) will define workflows that can be directly automated using XDW.

IHE Patient Care Coordination (PCC), IHE Radiology, and IHE Pharmacy have already started to build upon XDW, with profiles due by the summer of 2012 such as:

  • XBeR-WD Cross Enterprise Basic e-Referral Workflow Definition Profile
  • XSM Cross Enterprise Screening Mammography Workflow Definition Profile (White Paper)
  • XTHM-WD Cross Enterprise Tele Home Monitoring Workflow Definition Profile
  • XTB-WD Cross Enterprise Tumor Board Workflow Definition Profile
  • CMPD Community Medication Prescription and Dispense

Selected Standards
The XDW Supplement introduces a new content profile type, for a workflow management document, based on the following standards:

  • OASIS Human Task for task structure and encoding (part of the BPEL suite of standards)
  • HL7 CDA for provider description
  • HL7 R-MIM for patient and author description

In XDW, no new transactions are introduced. XDW leverages existing IHE ITI Profiles:

  • XDS, PIX, PDQ, DSUB, BPPC, ATNA, XUA, CT
  • No XDS Metadata extension, but specific rules about the XDS Metadata content for the registry entry associated with the XDW Workflow Document (a rough sketch follows)
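
As a rough sketch of what such registry-entry rules could look like in practice, here is an illustrative Python fragment. The particular codes shown are placeholders I invented, not the values mandated by the XDW supplement.

```python
# Illustrative only: registry metadata for a Workflow Document entry.
# The actual codes and rules are defined in the XDW supplement; these are placeholders.
workflow_document_entry = {
    "uniqueId": "1.2.3.4.5.20120116.1",
    "patientId": "MRN12345^^^&1.2.3.4&ISO",
    "classCode": "WORKFLOW",                        # placeholder class for workflow documents
    "formatCode": "urn:example:xdw:workflowDoc",    # placeholder format code
    "eventCodeList": ["simple eReferral"],          # which Workflow Definition is being run
    "availabilityStatus": "Approved",               # only the latest version stays Approved
}

def is_workflow_document(entry: dict) -> bool:
    """Sketch of how a consumer might recognize workflow documents in query results."""
    return entry.get("formatCode", "").endswith("workflowDoc")

print(is_workflow_document(workflow_document_entry))
```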

Is XDW ready for prime time?

With the trial implementation profile specification available, the first IHE Connectathon testing is scheduled for Bern (Switzerland) in May 2012, with at least an e-Referral workflow. The profile should be ready for implementation in products toward the end of 2012 or early 2013.

Conclusion
The integration of multi-organizational workflows with the variety of existing workflow management systems used within the participating organizations has been waiting for such a common, workflow-independent approach to interoperability.

XDW provides a platform upon which a wide range of specific workflows can be defined with minimal specification and implementation efforts (e.g., Medical Referrals Workflow, Remote Imaging Diagnosis, Prescriptions Workflow, Home Care Workflow). As it increases the consistency of workflow interoperability, it is targeted to facilitate the development of interoperable workflow management applications where workflow-specific customization is minimized.

Much attention has been applied to making its deployment easy, without the need for dedicated centralized infrastructure beyond the secured health document sharing infrastructure provided by an XDS Registry/Repository.

Monday, January 2, 2012

ATNA + SYSLOG is good enough

There has been renewed discussion within the IHE ITI Technical committee around the topic of syslog and application-level acknowledgement. There are calls for healthcare to move away from SYSLOG and invent its own protocol with an application-level acknowledgement. Rob has provided his analysis and proposed one solution, then followed it with more analysis. I simply don't think that the problem is worth fixing: there is a very small likelihood of it happening, and it is detectable. With good design, once the failure has been detected it can be completely recovered from. This being the months leading up to Connectathon, the topic of design vs. interoperability specification comes up quite often; see, for example, the ATNA audit log recording of Query transactions below.

The concern is that there are cases where one can cause audit events to be lost by killing the network in between the audit sender and the Audit Record Repository. If this happens on a link in the middle, then neither side notices until retransmission timeouts expire. In that time the sending side may no longer have the messages to retransmit at the application level. The core concern is the integrity of patient privacy reports such as the Accounting of Disclosures.

Analysis
This is the reason why ATNA has all systems involved recording auditable events. Although one system might have lost an audit event, the other party involved in a transaction will likely have succeeded in recording it. That is, the client of a transaction (e.g., XDS Document Consumer) may fail to get its auditable event recorded, but the server of that transaction (e.g., XDS Registry) does get its auditable event recorded. Further, each access to the data once it is inside the receiving system (e.g., XDS Document Consumer) must also be recorded. Among all of these audit records there will be sufficient information, even if a few events are lost. This protocol was designed back when SYSLOG was completely UDP based, favoring a model of no delay, possible out-of-order delivery, and no queues; it has since moved to a reliable transport (TCP), now with security (TLS).

The security officer can see that there is a missing audit event, as all transactional events should appear twice, and can investigate the failure that caused the loss. If the failure continues to happen, then they have the knowledge to make the failing system more robust, for example by putting an ARR closer to that system (such as in the Distributed Accountability diagram), possibly on loopback with a filter auto-forwarding robustly. Using a standard like SYSLOG allows the use of off-the-shelf building blocks.

I will point out that the TCP protocol is a reliable transport (I wrote a complete commercial stack back in the 80s for Frontier Technologies Corp; throw stones if you wish). The TCP problem that people are pointing at is totally detectable, but it requires that the application wait for confirmation that the connection was closed gracefully (SO_LINGER, or shutdown). I am assuming that the observed problems of lost audit events are due to implementations that do not do a graceful shutdown of the socket, so they can't notice when the connection closes abnormally. Applications have responsibility too. It is very true that if you don't wait for a graceful shutdown to complete normally, then you can't know whether all the data you sent has been received, or whether you have received all the data the other side sent.
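
A minimal sketch, in Python, of what "wait for a graceful shutdown" means at the socket level. This is generic TCP behavior rather than anything ATNA-specific; the function name and framing of events are my own invention, and the real transport would be syslog over TLS.

```python
import socket

def send_audit_events_and_confirm(host: str, port: int, events: list) -> bool:
    """Send events, then do a graceful shutdown and wait for the peer to close.

    Returning True means the peer signaled an orderly close after we finished
    sending, which is the confirmation the text talks about. Returning False
    means the connection died and we cannot be sure what arrived.
    """
    with socket.create_connection((host, port)) as sock:
        for event in events:          # each event is a pre-framed bytes payload
            sock.sendall(event)
        # Signal "no more data" but keep the read side open.
        sock.shutdown(socket.SHUT_WR)
        try:
            # A graceful close by the peer shows up as recv() returning b"".
            # An abort (RST) raises an exception instead.
            while sock.recv(4096):
                pass                  # drain anything the peer still sends
            return True
        except OSError:
            return False
```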

Going deeper
There is one case where the 'wait' can be very long and leave things indeterminate. The case is well documented by one of the leading thought leaders in the SYSLOG community, Rainer Gerhards.

The case is where a network failure happens during communications. Normally the Audit Record Repository is only receiving, and thus there is no outbound TCP traffic from the Audit Record Repository to trigger a failure event. To protect against these cases, TCP implementations provide SO_KEEPALIVE, which has the TCP stack on the ARR side send keepalive probes just to elicit a positive TCP ACK or a reset. So I would suggest that ARRs use SO_KEEPALIVE. The ARR would then know all the data that was received and that the connection terminated non-gracefully. So the ARR side is detectable and deterministic.
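
For what that looks like in practice, here is a hedged Python sketch of the receiving (ARR) side enabling keepalive on an accepted connection. The fine-tuning options are platform specific (the ones shown exist on Linux) and the timings are arbitrary examples, not recommendations.

```python
import socket

def enable_keepalive(conn: socket.socket) -> None:
    """Turn on TCP keepalive so a dead peer is eventually detected even when
    the ARR never sends application data. Timings below are arbitrary examples."""
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific tuning knobs
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

# Sketch of the accept loop on the ARR side (real ATNA traffic would be syslog over TLS):
# server = socket.create_server(("", 6514))
# conn, addr = server.accept()
# enable_keepalive(conn)
```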

The sending side would have data in the outbound queue (at least in the documented case), so this data will be retransmitted until the TCP stack on the sending side gives up (yes, lots of retransmits later, with dynamic backoff). The sending side can also notice that its outbound writes want to block and, based on application logic (queue-size and time configurations), presume failure. So the sending side will know that a failure has happened, just not 'where in the data stream'. Yes, the sending side is very blind to the TCP outbound queue inside the stack. Thus, for a full record (which I argue above is not critical), the sending side would need to re-send all unrecorded audit events, and it doesn't know how far back to go. The sending side could also use SO_KEEPALIVE; it would help detect a failure when it happens, which might be while the outbound queue is empty.

Note that both the sender and the ARR should really be recording this connection anomaly as an auditable event, thus flagging it for inspection by the security officer.

Detect and Mitigate
If you want to make sure you have all your audit events recorded, you could always gracefully close the SYSLOG connection (shutdown of output, SO_LINGER), open a new one for new events, and await the graceful-close notification on the old connection. This has additional overhead, and I have no idea how well Audit Record Repositories would tolerate it. Note that more auditable events might be recorded on open and close of the SYSLOG socket.

I could imagine a robust design that uses some outbound queue size or inactivity timeout to trigger this confirmation-flush shutdown. In that case, the sending side knows exactly what should be re-sent if a network failure happens, possibly delivering duplicates to the ARR (an easy thing to detect at the ARR). This seems like a high level of logic to handle an event that doesn't happen often, is detectable, and is protected against by duplicate events. As Rob points out, a retransmitted ATNA audit event will mostly be detected at the Audit Record Repository, although Rob suggests we could make the protocol more robust.
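
A sketch of that kind of design, under my own assumptions: the sender keeps its own copy of unconfirmed events, periodically does the confirmation-flush close described above, and only then discards them; after a failure it simply resends the whole unconfirmed batch and lets the ARR discard duplicates. The class and its behavior are hypothetical, not anything IHE specifies.

```python
import socket
from collections import deque

class ResilientAuditSender:
    """Hypothetical sender-side design: keep unconfirmed events until a graceful
    close confirms delivery; on failure, resend the whole unconfirmed batch.
    Duplicates are acceptable because the ARR can detect and discard them."""

    def __init__(self, host: str, port: int, flush_after: int = 100):
        self.host, self.port = host, port
        self.flush_after = flush_after
        self.unconfirmed = deque()   # events sent but not yet confirmed delivered

    def send(self, event: bytes) -> None:
        self.unconfirmed.append(event)
        if len(self.unconfirmed) >= self.flush_after:
            self.flush()

    def flush(self) -> None:
        """Send everything unconfirmed on a fresh connection and wait for a
        graceful close; only then forget the events."""
        try:
            with socket.create_connection((self.host, self.port)) as sock:
                for event in self.unconfirmed:
                    sock.sendall(event)
                sock.shutdown(socket.SHUT_WR)
                while sock.recv(4096):
                    pass                      # peer closed gracefully
            self.unconfirmed.clear()          # confirmed: safe to discard
        except OSError:
            pass  # keep the queue; everything will be resent on the next flush
```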

Note that we did originally specify the "Reliable SYSLOG" protocol, which does include these application-level controls. This protocol was rejected by the IHE developers, and also by the general SYSLOG community; it was considered too complex and too hard to implement. The SYSLOG community may continue to mature and head back to this more robust approach, but I don't see that happening very fast. The reality is that the problem does exist, but there are other ways to solve it without changing the protocol completely.

Updated: Rob has posted an article on his experience with network failures. This is more proof that one needs good design, design that has considered risks (including security risks).

ATNA audit log recording of Query transactions

A common question that comes up when people are implementing ATNA audit logging is what to do when recording that a "Query" has happened. This might be one of the Queries that IHE defines in Profiles like PIX, PDQ, or XDS, or it might be a database query of some kind. This topic is not fully covered in the IHE Technical Framework, but it is covered better than people recognize.

According to the IHE ITI Technical Framework, Volume 2a, section 3.20 (Record Audit Event), table 3.20.6-1, the "Query Information" event note says
Notes: The general guidance is to log the query event with the query parameters and not the result of the query. The result of a query may be very large and is likely to be of limited value vs. the overhead. The query parameters can be used effectively to detect bad behavior and the expectation is that given the query parameters the result could be regenerated if necessary.
This philosophy is why the security considerations for the PIX, PDQ, and XDS 'query' transactions, such as the XDS "Registry Stored Query", show only the query (parameters and outcome indicator) being captured, and not the query results. IHE did not duplicate this note in all the places where it profiles query-like transactions. Familiarity with section 3.20 is essential, and table 3.20.6-1 is critical, as it contains a listing of the other security-relevant events that are expected to be captured if your "Secure Node" or "Secure Application" controls them.

It is important to note that the "Query Information" audit event does include the "EventOutcomeIndicator", which will show whether the query succeeded or failed. It is not an indicator of how many results were returned, so a successful query that returned zero results looks just like a successful query that returned 1 billion results. This means that the ATNA audit event can't be recorded until the results are known, or at least until it is known whether the query will succeed or fail. Note that it is expected that an Access Control decision to deny a query (resulting in zero results returned) would also be an auditable event, and thus an Access Control denial event would be recorded that would explain why.
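
A hedged sketch of this guidance in practice, using an invented and much simplified record structure rather than the real ATNA/DICOM audit message XML: the query parameters and the outcome indicator are captured, the result set is not.

```python
from datetime import datetime, timezone

# EventOutcomeIndicator values (success and escalating degrees of failure).
SUCCESS, MINOR_FAILURE, SERIOUS_FAILURE, MAJOR_FAILURE = 0, 4, 8, 12

def query_audit_event(query_id: str, parameters: dict, outcome: int) -> dict:
    """Build a simplified 'Query Information' audit record.

    Invented structure for illustration; the real message is the DICOM audit
    message schema carried over syslog. Note what is captured: the query and
    its outcome, never the query results."""
    return {
        "eventType": "Query Information",
        "eventDateTime": datetime.now(timezone.utc).isoformat(),
        "eventOutcomeIndicator": outcome,   # success/failure only, not a result count
        "queryId": query_id,                # e.g., which stored query was invoked
        "queryParameters": parameters,      # enough to re-run the query later
        # deliberately no "queryResults" field: keep PHI out of the audit log
    }

event = query_audit_event(
    "FindDocuments",
    {"$XDSDocumentEntryPatientId": "MRN12345^^^&1.2.3.4&ISO",
     "$XDSDocumentEntryStatus": "Approved"},
    SUCCESS,
)
print(event["eventOutcomeIndicator"])
```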

The second sentence in the note is the 'reason' why we want only the query to be recorded and not the results. The results would seem to be very useful, especially to a Privacy officer. But the results would likely be high-quality PHI, and we want to keep as much PHI out of the ATNA audit log as possible. This is why ATNA asks for identifiers and discourages descriptions. This is simply good design, limiting unnecessary risk; the broader philosophy is to minimize the sensitive data held in the audit log itself.

The information that is not recorded can be discovered through other means. The ATNA Audit Record Repository is an abstract actor; it is fully expected that a valuable system/product would combine many actors and functions. That is, a good Audit Service would include an ATNA Audit Record Repository, but would also include a PIX Consumer, PDQ Consumer, PWP Consumer, XDS Consumer, and any other actors/functionality necessary to de-reference identifiers. This act of de-referencing the identifiers would itself be security relevant and thus auditable. In this way there is built-in watching of the watchers.

Specifically in the case of XDS queries, the Audit Service can tell, simply from the query parameters that are encoded into the ATNA audit message record of the XDS query, which queries were against a specific patient. This is because all XDS queries are patient-specific and include the patient ID in the query parameters. Yes, there are a few exceptions, but they are recognizable and handled independently.
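
For example, a minimal sketch of how an Audit Service might pick the patient out of recorded query parameters. It works against the simplified record structure sketched above (not real audit XML), and the parameter naming convention it relies on is the one used by the XDS stored queries.

```python
def patient_for_query(audit_record: dict):
    """Return the patient id a recorded XDS query was about, if any.

    Uses the simplified record structure sketched earlier; real ATNA messages
    would need the query parameters decoded from the audit XML first."""
    params = audit_record.get("queryParameters", {})
    for name, value in params.items():
        # XDS stored queries carry the patient id in parameters named like this.
        if name.endswith("PatientId"):
            return value
    return None  # one of the few non-patient-specific queries

sample = {"queryParameters": {"$XDSDocumentEntryPatientId": "MRN12345^^^&1.2.3.4&ISO"}}
print(patient_for_query(sample))
```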

If the security office or privacy office wants to know what the results of the query were, they can re-execute the query. With specialized (not IHE-profiled) tools the query can even be run with the same security context. Simplistically one would say that the state of the database has changed since the original query; this might be true, but good database design would also have a change-tracking log that would tell you what has changed since the date/time stamp of the original query.

Note that this philosophy is consistent with the other transactions; it just needs to be spelled out somewhere for queries. For example, for the XDS Retrieve Document Set transaction, we don't tell you not to duplicate the bytes of the retrieved document into the ATNA audit message. It seems logical that one doesn't copy the retrieved document into the audit log; it just doesn't feel as logical when the transaction is a query.

All of this is 'systems design' and is not necessary to include in the 'Interoperability Profiles', because this systems-design knowledge doesn't change the interoperability characteristics. IHE also doesn't include this systems-design knowledge because there are likely many ways to design a system. IHE technical committees do consider systems design during profile writing; specifically, we make sure that there is at least one way to design a system.

See: