Coalition for Health AI (CHAI) Releases New Best Practice Guides and Testing & Evaluation Frameworks for Four Additional Use Cases
24 December 2025
December 24, 2025 – Today, the Coalition for Health AI (CHAI) released a new series of best practice guides (BPGs) and testing and evaluation (T&E) frameworks for four additional use cases: General Health Advice Chatbot, Prior Authorization, EHR Information Retrieval, and Clinical Decision Support. The guides and frameworks, developed through CHAI’s collaborative work group engagement, will help healthcare organizations safely evaluate, deploy, and scale responsible AI. The resources are living documents and are freely available to the public.
CHAI’s BPGs and T&E frameworks reflect an extensive, consensus-driven development process made possible by the coalition’s use case-specific work groups. More than 100 experts, including clinicians and representatives from health systems, startups, academia, industry, advocacy, and policy, contributed to their design. Through regular, collaborative work group meetings, CHAI built consensus across participants by analyzing real-world challenges, aligning evaluation methods, and defining processes that health systems can operationalize to safely adopt AI.
“CHAI remains committed to our mission of building the broadest possible consensus across the healthcare ecosystem to help ensure AI is trusted and safe,” said Dr. Brian Anderson, CEO of CHAI. “Work groups are the crux of this mission, as they allow us to convene experts, gather feedback, and create dialogue to inform our BPGs and T&E frameworks. I’m thrilled to see the product of this collaboration in the new resources released today.”
The BPGs surface challenges, insights, discussions, and current consensus-defined practices for developers and implementers, giving organizations a look into what is happening in the rapidly evolving field of health AI. The T&E frameworks provide options for how developers and implementers can operationalize evaluation and monitoring of responsible AI principles as they relate to specific use cases.
“We recognize that new evidence and experiences will always arise,” said Merage Ghane, PhD, Director of Responsible AI at CHAI. “As such, CHAI has made the feedback process for these documents a continuous one. We wanted to ensure that anyone could submit an issue form with specific change recommendations for the BPGs and T&E frameworks. These suggestions then contribute to our new version updates, along with a summary of changes and feedback responses – all keeping the process entirely collaborative.”
The following use case BPGs and T&E frameworks are now available on the CHAI website for anyone to access. To explore the existing responsible AI content and upcoming use cases, visit: https://www.chai.org/workgroup/use-case. The T&E frameworks can also be accessed through the CHAI Responsible AI GitHub Page.
General Health Advice Chatbot: The general health advice chatbot is designed to provide users with reliable, non-clinical health information and personalized guidance, promoting informed decision-making and enhancing health literacy.
Work group leads from: Zelis, WellnessWits, National Council for Mental Wellbeing, Florida State College of Nursing, Affineon, IQVIA
Prior Authorization: The criteria matching component of an AI-supported prior authorization (PA) system automates the process of assessing whether a healthcare service, procedure, or medication meets the payer’s medical necessity guidelines.
Work group leads from: BCBS Minnesota, CVS Health, Encore Health, Humata Health, Lyric.ai, MCG Health, Solventum, UnitedHealth Group, Penguin Ai
EHR Information Retrieval: This use case focuses on AI systems that help clinicians quickly find relevant information within a patient’s electronic health record (EHR). These tools use natural language processing or search algorithms to extract critical clinical data (e.g., prior diagnoses, medications, lab results), reducing time spent navigating complex records and enabling faster decision-making.
Work group leads from: Infinitus Systems, MCG Health, Abstractive Health, UC San Diego, Regard, UC San Francisco, Columbia, Mass General Brigham, Intel
Clinical Decision Support: Clinical Decision Support (CDS) tools use AI to assist healthcare professionals in making timely, evidence-based decisions. While CDS encompasses a wide range of applications—from risk prediction and diagnostic assistance to treatment recommendations and guideline adherence—this particular use case focuses on a CDS solution powered by a large language model (LLM) using a retrieval-augmented generation (RAG) approach.
Work group leads from: Microsoft, MedStar Health, Elsevier, EBSCO/DynaAI, Darena Solutions/Prompt Opinion, Wolters Kluwer, Mayo Clinic