
CHAI Model Card Utilized in HL7 AI Transparency on FHIR Implementation Guide

31 January 2026

Artificial intelligence holds immense promise for improving healthcare—from enhancing clinical decision-making to streamlining administrative workflows. With that promise comes the critical need for transparency: ensuring clinicians, patients, and systems understand when AI has influenced health data, how those insights were generated, and which models were involved. Standards for transparent AI are essential for building trust and accountability and for enabling safe adoption in healthcare.

That’s why we’re excited to share that CHAI has played an active role in the HL7 AI Transparency on FHIR Implementation Guide—a newly balloted standard that incorporates the CHAI Model Card to provide a framework for representing AI usage in FHIR, advancing transparent, interoperable documentation of AI models in healthcare. This guide enables downstream systems and users to determine whether a FHIR resource was influenced by AI and to trace key details about those interactions.
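To make this concrete, below is a minimal, illustrative sketch (Python emitting FHIR R4 JSON) of one way an AI contribution might be traced: a standard Provenance resource that points at the affected resource and names the AI model as an agent. The resource IDs, the DiagnosticReport target, and the agent type codes are assumptions chosen for illustration; the actual profiles, required elements, and terminology are defined by the implementation guide itself.

    import json

    # Illustrative only: the IDs, references, and codes below are assumptions,
    # not the profiles or terminology mandated by the AI Transparency on FHIR IG.
    provenance = {
        "resourceType": "Provenance",
        # The resource whose content was influenced by AI (hypothetical example).
        "target": [{"reference": "DiagnosticReport/chest-xray-report-123"}],
        "recorded": "2026-01-15T10:30:00Z",
        "agent": [
            {
                # Agent type chosen for illustration; the IG may require a specific code.
                "type": {"coding": [{
                    "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                    "code": "assembler"}]},
                # The AI model, represented as a Device resource (see the next example).
                "who": {"reference": "Device/ai-model-example"}
            },
            {
                # The clinician who reviewed or authored the final content.
                "type": {"coding": [{
                    "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                    "code": "author"}]},
                "who": {"reference": "Practitioner/reviewing-clinician"}
            }
        ]
    }

    print(json.dumps(provenance, indent=2))

A downstream system could then look for a Provenance entry whose agent resolves to an AI model Device to decide whether a record warrants additional review.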

By embedding the CHAI Model Card into this HL7 specification, implementers now have a standardized way to express important model metadata (like purpose, version, performance characteristics, and intended use) directly within AI transparency artifacts. This promotes better understanding, governance, and safe use of AI throughout clinical and operational workflows.
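As a rough sketch of what that could look like, the snippet below attaches model-card-style metadata (purpose, intended use, version, and a performance summary) to a FHIR R4 Device representing the model. The extension URL and field names are placeholders invented for illustration; the implementation guide and the CHAI Model Card specification define the actual structures and bindings.

    import json

    # Illustrative only: the extension URL and field names are placeholders,
    # not the actual structures defined by the IG or the CHAI Model Card.
    ai_model_device = {
        "resourceType": "Device",
        "id": "ai-model-example",
        # FHIR R4 Device.deviceName; "model-name" is a standard deviceName type code.
        "deviceName": [{"name": "Example Sepsis Risk Model", "type": "model-name"}],
        "version": [{"value": "2.1.0"}],
        "extension": [{
            # Hypothetical extension carrying model-card metadata.
            "url": "http://example.org/fhir/StructureDefinition/model-card",
            "extension": [
                {"url": "purpose",
                 "valueString": "Early identification of adult inpatients at risk of sepsis"},
                {"url": "intendedUse",
                 "valueString": "Decision support for clinicians; not for autonomous treatment decisions"},
                {"url": "performanceSummary",
                 "valueString": "Validation performance as reported in the developer's model card"}
            ]
        }]
    }

    print(json.dumps(ai_model_device, indent=2))

Because the metadata travels with the Device that the Provenance agent references, any system that can resolve the reference can surface the model’s documented purpose and intended use alongside the AI-influenced data.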

Why this matters:

  • Clarity on AI’s role in generating or modifying health data, enabling informed review and decision-making. 

  • Standardized model documentation that supports responsible deployment and auditability.

  • Interoperability across health IT systems, powering consistent interpretation of AI contributions across care settings.

AI Transparency Track: Connectathon Highlights

During the January HL7 Connectathon, the AI Transparency Track focused on advancing the maturity and clarity of the AI Transparency on FHIR IG through hands-on collaboration and implementation testing. We set out to:

  • Review and collect feedback on the AI Transparency on FHIR IG currently in ballot

  • Develop concrete implementation examples

  • Test those examples across different approaches and implementations

Notable achievements from the track include:

  • Robust discussion of the core use cases the IG is intended to support

  • Identification of several targeted improvements, primarily focused on clarifying profiled requirements and expanding the detail and realism of the use cases

  • Development, review, and refinement of a complete example demonstrating how AI transparency artifacts can be represented in FHIR

What’s next?

  • Community comments have been formally received

  • The HL7 ballot is now closed

  • The project team is actively working to update and strengthen the specification based on Connectathon findings and ballot feedback

We’re proud that the expertise of CHAI and its community contributed to this important step toward responsible AI adoption and interoperability. Thank you to everyone who contributed, and stay tuned as this IG moves through the remainder of the HL7 ballot process!
