The guide is designed for health IT developers, clinicians, and institutions that use AI (including generative AI and large language models) to generate or process health data. It defines a common format so that downstream systems and human users can see which data came from AI: when, how, and by which algorithm. This helps them judge whether AI-derived data are reliable, appropriate, or in need of further review.
Key features include (see the sketch after this list):
- Tags or flags on FHIR resources (or individual data elements) to mark AI involvement.
- Metadata about the AI tool: model name and version, timestamps, confidence or uncertainty scores.
- Documentation of human oversight (for example, whether a clinician reviewed or modified AI outputs).
- Traceability: which inputs (e.g., clinical note, image, lab result) were fed to the AI, and how outputs were used to produce or update health data.
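To make the list concrete, here is a minimal sketch of how such tagging and provenance metadata might look on a FHIR R4 resource, written as plain Python dictionaries serialized to JSON. The tag system URL, codes, resource IDs, and model names are illustrative assumptions, not identifiers defined by the guide.

```python
import json

# Hypothetical tag identifying AI involvement on a FHIR resource.
# The system URL and code are illustrative, not defined by the guide.
AI_TAG = {
    "system": "http://example.org/fhir/CodeSystem/ai-involvement",  # assumed
    "code": "ai-generated",
    "display": "Generated by an AI algorithm",
}

# An Observation whose value was produced by an AI model, flagged via meta.tag.
observation = {
    "resourceType": "Observation",
    "id": "obs-123",
    "meta": {"tag": [AI_TAG]},
    "status": "preliminary",  # pending human review
    "code": {"text": "Left ventricular ejection fraction"},
    "valueQuantity": {"value": 55, "unit": "%"},
}

# A Provenance resource capturing which model produced the data, when,
# from which inputs, and whether a clinician reviewed the output.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/obs-123"}],
    "recorded": "2024-05-01T14:32:00Z",
    "agent": [
        {
            # The AI model, represented as a Device (name/version are examples).
            "type": {"text": "AI algorithm"},
            "who": {"reference": "Device/echo-ai-model", "display": "EchoAI v2.1"},
        },
        {
            # Human oversight: the clinician who verified the AI output.
            "type": {"text": "verifier"},
            "who": {"reference": "Practitioner/dr-lee"},
        },
    ],
    # Traceability: the input that was fed to the AI model.
    "entity": [
        {"role": "source", "what": {"reference": "Media/echo-study-789"}},
    ],
}

print(json.dumps(observation, indent=2))
print(json.dumps(provenance, indent=2))
```

Keeping the AI flag in `meta.tag` lets existing FHIR servers store and search it without schema changes, while the separate Provenance resource carries the richer who/when/what detail.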
For stakeholders such as patients, clinicians, and health-system administrators, the main benefit is transparency: users can tell whether data were AI-generated or human-authored, which supports trust, safety, and informed use of AI in care.
And when an AI model or prompt is later found to produce unsafe recommendations, these transparency indicators make it possible to trace the potentially affected data and flag it for reexamination.
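If a particular model version is implicated, Provenance records like the one sketched above make the affected data queryable. A minimal sketch under the same assumptions (resources as Python dicts; the Device reference is a hypothetical identifier):

```python
def find_affected_targets(provenances, model_reference):
    """Return references to resources whose Provenance names the given model.

    `provenances`: iterable of FHIR Provenance resources as Python dicts.
    `model_reference`: the Device reference for the suspect model version,
    e.g. "Device/echo-ai-model" (a hypothetical identifier).
    """
    affected = []
    for prov in provenances:
        for agent in prov.get("agent", []):
            if agent.get("who", {}).get("reference") == model_reference:
                affected.extend(t["reference"] for t in prov.get("target", []))
                break
    return affected

# Standalone example using the same shape as the sketch above.
provenances = [{
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/obs-123"}],
    "agent": [{"who": {"reference": "Device/echo-ai-model"}}],
}]

for ref in find_affected_targets(provenances, "Device/echo-ai-model"):
    print(f"Re-review needed: {ref}")
```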
AI will be used regardless; attributing health data to that use will help us handle those data appropriately in the future.