CHAI Releases New Patient Survey Report on Health AI & Transparency

28 January 2026

Artificial intelligence is now firmly embedded across the healthcare system – from clinical decision support and diagnostics to administrative workflows and patient engagement. Yet as adoption accelerates, public trust, governance workflows, and accountability structures are not keeping pace.

To quantify this disconnect, the Coalition for Health AI (CHAI) conducted new national research examining public trust in health AI. The study was funded by the California Health Care Foundation, conducted by NORC at the University of Chicago using its AmeriSpeak® probability-based panel, and informed by CHAI’s Policy Workgroup alongside input from more than 150 clinicians, patients, and technology vendors.

This research builds on CHAI’s 2025 analysis of how U.S. states are converging and diverging on health AI requirements. Together, these efforts aim to ground policy and governance decisions in real-world evidence about how providers and patients experience and evaluate AI in healthcare. 

What We Found: A Widening Trust Gap

The data reveals a striking disconnect between the prevalence of AI use and public confidence:

  • 75% of respondents report using AI, yet only 13% say they feel very comfortable with it

  • 51% report that AI makes them trust healthcare less, while just 12% say it increases trust

  • 93% report at least one concern about the use of AI in healthcare

  • More than 80% indicate their trust would increase if clear accountability measures were in place

While AI adoption is already widespread, confidence remains fragile and highly contingent on how AI is governed. 

What the Public Is Actually Concerned About

Several themes that emerge from the data challenge common assumptions about patient concerns related to health AI:

  • Governance and Accountability Matter More Than the Technology Itself: Concerns center less on whether AI exists in health, and more on who is accountable when it is used, how decisions are monitored, and what protections are in place for patients. The findings indicate that oversight meaningfully reassures the public, especially for low-trust use cases. Patients express clear discomfort with scenarios in which AI systems operate without meaningful human oversight – an especially salient finding as more autonomous and agentic AI systems begin to enter healthcare workflows.

  • Use, Comfort, and Trust Are Not the Same: While AI use is high, the survey shows only a modest relationship among trust, intentional use, and overall comfort. These dimensions vary significantly depending on context, use case, and perceived safeguards, underscoring that familiarity alone does not build trust. Across the data, there is no clear evidence of a group that regularly uses AI in healthcare while simultaneously expressing strong distrust.

  • Data Commercialization Raises More Alarm Than Bias: Respondents expressed greater concern about the commercialization and sale of health data than about algorithmic bias. Notably, 12% of respondents report never having considered AI bias at all, highlighting a gap between expert discourse and public salience. Insurance-related applications remain substantially less trusted: trust stays at around 28% even when clinician review is introduced.

  • Disclosure Alone Is Not Enough: While transparency about AI use is broadly expected, the data suggest that disclosure by itself is insufficient to build trust – and in some cases may even reduce trust if not paired with clear explanations of oversight, accountability, and patient protections. 

  • Who Should Be Responsible for Oversight? There is no single institution that respondents overwhelmingly trust to oversee health AI. Instead, participants favor multi-layered governance models that draw input and feedback from:

    • Independent nonprofit organizations

    • Health systems and provider organizations

    • Federal regulators

Why Now, and What Comes Next?

As more states accelerate health AI regulation and health systems continue to scale real-world deployment, these survey findings offer an evidence base for policy and governance grounded in public behavior and preferences. 

For CHAI members and the larger community – including policymakers, health systems, developers, researchers, patient advocates, and more – this research demonstrates the urgency of moving beyond technical performance metrics alone.

Building on this research, CHAI will continue to develop practical tools, guidance, and open-source resources to support health systems in AI governance, validation, and post-deployment monitoring. We hope these findings inform leaders responsible for policy, deployment, and oversight, and contribute to more durable, trust-centered approaches to AI in healthcare.

Read the full report here.

Get in touch

chai@12080group.com


Copyright 2026 © Coalition for Health AI, Inc
