Member Spotlight: Q&A with Emory Healthcare
14 May 2026
Featuring insights from Nabile Safdar, Chief AI Officer at Emory Healthcare
Tell us about Emory Healthcare and your role there.
I am the Chief AI Officer at Emory Healthcare – the largest academically based, clinically integrated network in Georgia, with more than 3,800 physicians specializing in 70 different subspecialties across over 580 provider locations.
In Atlanta, Emory Healthcare has been leading in discovery, particularly in its ability to take AI all the way from basic research to deployment. We leverage strong partnerships with the university and have invested heavily in hiring faculty doing AI work across disciplines – from biomedical informatics and engineering to business, physics, math, and computer science. That allows us to develop tools, take them to proof of concept or MVPs, and even deploy them. We also partner with a wide range of vendors to bring solutions to our providers and patients.
What role do you see AI playing across large academic health systems? Where is the greatest opportunity for impact?
Health systems, especially academic medical centers, are under a lot of pressure. Burnout is at all-time highs across clinicians and staff. Financial pressures are high, and volumes are increasing. There are a few bright spots where AI is already showing promise:
Imaging (e.g., scans, X-rays)
Ambient AI scribes, which are quickly becoming table stakes
AI embedded in the EHR
Revenue cycle optimization reducing friction between providers and payers
Population health, like voice agents supporting medication adherence or chronic disease management
There are really two tracks here. One is optimizing what already exists but hasn’t been implemented well – that alone is a decade of work. The other is exploring entirely new possibilities that we haven’t fully imagined yet. Right now, much of what we’re doing is optimizing processes that require manual effort – often work humans would rather not do. Over time, we’ll shift toward optimizing processes for AI agents, allowing humans to focus on more meaningful work at the top of their license.
How is AI changing roles across the organization? What are the biggest challenges with upskilling and change management?
Even if a technology is mature and delivers strong outcomes, it will fail if people don’t know how to use it or don’t understand its limitations. So upskilling is critical. There’s basic compliance training like not putting patient data into chatbots, but that’s just the baseline. We also need deeper training on things like prompting, detecting hallucinations, and knowing when to escalate issues.
In terms of roles, AI both reduces and creates work. For example, AI can identify care gaps much faster than humans, but then you need teams to follow up: outreach, eligibility checks, oversight. Similarly, chart abstraction tools can increase efficiency, allowing staff to handle more volume and focus on higher-value tasks.
We’re also seeing opportunities in call centers and revenue cycle. In clinical care, the big question is how much automation, like triage or prescription renewals, society is ready to accept. Some pilots are underway, but there’s still a lot to learn about tolerance, ROI, and outcomes.
With so many opportunities, how do you decide what to pursue, and what not to?
Every request feels important, and many are. The challenge is balancing capacity between bottom-up requests and top-down strategic priorities. We try not to operate at either extreme. Some capacity is reserved for system-level priorities, while some is dedicated to requests from across the organization.
Projects aligned with system goals – financial, quality, patient care, wellness – tend to get prioritized, especially if they have measurable impact. That impact doesn’t have to be financial; it could be time saved or improved outcomes.
For smaller, niche requests, we’re honest about limitations. Instead of building everything centrally, we often enable teams to build solutions themselves using low-code or no-code tools. With some guidance, many teams can create what they need, and that’s been a very effective way to manage capacity.
How are you measuring success across AI initiatives?
It depends on the use case. I think it’s a mistake to judge every AI initiative purely on hard ROI.
We take a portfolio approach. Some initiatives will be clear financial winners, others won’t – but the overall portfolio should deliver value.
Metrics vary by use case. For clinical AI, such as imaging, the focus is on accuracy, turnaround time, and impact on patient outcomes. Operational tools are typically measured by cost savings and efficiency gains. For ambient AI scribes, key metrics include provider satisfaction, time saved, and patient volume. Population health tools are evaluated based on improvements in specific health metrics.
In some cases, even a modest improvement like catching a critical finding earlier can be a major success. The key is matching the metric to the purpose and evaluating impact holistically.
How are you approaching AI governance at Emory?
We think of governance as a tiered model, not a single centralized process. At one level, domain experts like clinical teams, revenue cycle, and academic units are empowered to make decisions about AI in their areas. They know their needs and tools best.
At the top level, there’s corporate governance, which defines the organization’s posture toward AI: risk tolerance, investment level, and strategic direction.
In the middle is where most governance happens. That includes legal, compliance, AI and data leaders, procurement, and clinical leadership. High-risk or novel use cases, like those involving sensitive data or new vendors, are escalated to an enterprise-wide governance committee.
The goal is to push decisions to domain experts when appropriate, while ensuring clear pathways for escalation and alignment with overall strategy.
Why did Emory Healthcare join CHAI?
We joined because CHAI brings together like-minded organizations facing similar challenges. It’s an opportunity to learn from each other. The resources CHAI provides – like model registries and model cards – are valuable, and we wanted to both use and contribute to them.
We also had people in our organization who wanted to be part of the broader conversation. It’s been a great experience so far, and we’re looking forward to getting more engaged over time.