Member Spotlight: Q&A with Gerald Kierce, CEO, and Andrew Gamino-Cheong, CTO at Trustible.

18 March 2026

Tell us about Trustible and your roles there.

Gerald Kierce: I’m Gerald Kierce, co-founder and CEO of Trustible. Trustible is an AI governance software platform that works primarily with organizations in regulated environments like healthcare, insurance, and financial services.

Our goal is to help organizations accelerate the adoption of AI while managing the risks inherent in the technology and complying with evolving standards, regulations, and frameworks. Healthcare organizations are asking fundamental questions: What are best practices for adopting AI? How should we think about the unique risks of this technology?

Through our platform, we help automate governance processes. A simple way to think about it is as a kind of “TurboTax for AI governance.” We understand the regulatory environment and AI technology, and we help organizations operationalize governance within their workflows.

Andrew Gamino-Cheong: I’m Andrew, Trustible’s CTO and co-founder. My background is as a machine learning engineer working at the intersection of AI and policy.

At Trustible, I oversee product development. Our platform helps organizations build an inventory of AI systems and can recommend potential risks for different AI use cases and keeps those assessments updated based on the latest research, regulatory guidance, and analysis of AI incidents. That helps teams understand not just the risks they know today, but also the “unknown unknowns” as new issues emerge.

Many organizations are excited about AI but struggle with governance. What gaps do you see when companies move from experimenting with AI to deploying it in real workflows?

GK: One challenge is that early adopters of AI tend to be people who already understand the technology. If you look at an AI innovation team at a large hospital system, they usually know how to operate as the human-in-the-loop for an AI system.

But once those tools move into real workflows, the broader set of users may not have that same understanding. Even if an organization says they want a human in the loop, those humans need to be trained to recognize risks, understand limitations, and know when to override the system. Preventing over-reliance on AI requires education and training.

Another challenge is that small differences in use cases can create dramatically different risks. Recording a call for internal communication is very different from recording a conversation between a patient and a doctor. Organizations need ways to capture that nuance and apply governance appropriately across many different systems.

A common misconception is that governance is mostly about the models themselves. In reality, risk often depends more on the use case than the model.
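
To make that point concrete, here is a minimal sketch, in Python, of use-case-driven risk tiering built around Gerald’s recording example. The tiers and rules are illustrative assumptions, not a CHAI or Trustible rubric.

```python
# The same transcription model lands in different risk tiers
# depending on the use case, not on the model itself.

def risk_tier(use_case: dict) -> str:
    if use_case.get("involves_patients") or use_case.get("clinical_decision"):
        return "high"    # e.g. recording a patient-doctor conversation
    if use_case.get("handles_pii"):
        return "medium"
    return "low"         # e.g. recording an internal team call

same_model = "speech-to-text-v1"  # identical model in both cases

internal_call = {"model": same_model, "involves_patients": False, "handles_pii": False}
patient_visit = {"model": same_model, "involves_patients": True}

print(risk_tier(internal_call))  # low
print(risk_tier(patient_visit))  # high
```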

How is AI governance evolving as AI expands from administrative use cases into clinical workflows and agentic systems?

GK: The challenge is the pace of change. AI systems are evolving rapidly, and organizations may have hundreds or even thousands of use cases. A hospital system might only have a handful of people reviewing them. Traditional governance processes simply don’t scale. That’s where automation combined with human expertise becomes essential. Organizations need governance processes that enable innovation rather than bottleneck it.

AGC: Many organizations have built intake processes for proposed AI uses. But governance also needs to address the full lifecycle of these systems. Organizations that deployed AI a year ago may now need to replace or reassess their models to support new capabilities like agentic systems. But there’s often limited transparency about what changed between model versions, which means organizations need to conduct their own assessments.

Governance also requires ongoing monitoring and feedback. Recognizing issues like hallucinations – and even the different types of hallucinations – requires training and education.

How did Trustible become involved with CHAI?

GK: We got connected to CHAI through our own customers asking us, “Hey, have you looked at CHAI? Have you seen their frameworks?” The fact that CHAI is a coalition of members from some of the strongest health systems in the US gave us confidence that its work is developed by the practitioners who are actually doing this work day in and day out.

We fundamentally understand AI, but we can’t be experts in every industry. That’s where partnerships with organizations like CHAI become important. You can think of CHAI as developing the research, frameworks, and best practices. Our role is helping organizations operationalize and implement those insights through technology.

What advice would you give health systems and health tech companies implementing responsible AI governance?

GK: First, don’t view governance as a bottleneck. Instead, view it as an enabler.

When organizations implement structured AI governance, they often dramatically increase the number of use cases they’re able to deploy. Governance helps move ideas into trusted, real-world deployments more efficiently.

Second, clearly define what “trust” means for your organization. That definition may differ depending on whether you’re a hospital system, insurer, or medical device company.

AI is still early in its maturity compared with many areas of medicine. In some cases, the science around risk measurement is still developing; for example, understanding how much hallucination is acceptable in a medical context.

Organizations also need to think as much about the intended benefits of AI as they do about the risks. In bioethics, decisions are often made by weighing benefits against harms. The same principle applies here. In some cases, the marginal risk of AI may actually be lower than the human alternative.

Is there a part of the governance process that most helps organizations unlock adoption?

AGC: Most governance teams already have protections in place for a range of risks, particularly around privacy and security. But they don’t always recognize those protections as formal mitigations. That disconnect makes intake processes lengthier and slows adoption.

When we help customers map those protections, they often realize that most of the risks they were worried about have already been addressed: instead of dealing with twenty perceived risks, they might only need to focus on two or three. That clarity unlocks efficiency and scale, allowing them to move faster and repeat those patterns across multiple use cases.
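
Here is a rough sketch of that mapping exercise, assuming a simple set difference between perceived risks and risks already covered by existing controls. The risk and control names are hypothetical.

```python
# Hypothetical mapping of perceived risks to existing controls:
# risks already covered by a control drop out, leaving the short
# list that actually needs new governance work.

perceived_risks = {
    "PHI exposure", "unauthorized access", "model drift",
    "hallucination", "vendor lock-in", "audit gaps",
}

existing_controls = {
    "HIPAA privacy program": {"PHI exposure"},
    "SSO + role-based access": {"unauthorized access"},
    "SOC 2 audit process": {"audit gaps", "vendor lock-in"},
}

covered = set().union(*existing_controls.values())
residual = perceived_risks - covered

print(sorted(residual))  # ['hallucination', 'model drift']: only these need new mitigations
```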

GK: Another key factor is AI literacy. AI governance is a team sport, but expertise across organizations is uneven. Many organizations create governance committees with representatives from multiple teams, which helps build shared understanding. When people across the organization understand the technology, the path to operationalizing responsible governance opens up.
