A single Alliance25 conversation points to adaptive avatars as a real CME format, but validation, disclosure, and human review remain the work.
A CPD presenter at Alliance25 described three years of work moving synthetic patients from noninteractive use into real-time conversations that adapt to tone, emotion, and scenario path. This is an emerging, single-source signal, but it gives CME providers a concrete version of AI-enabled personalization to evaluate rather than another abstract AI promise.
In The Alliance Podcast interview, Bretten Gordeau described synthetic humans that can respond to learner tone, body language, emotion, cultural context, and clinical scenario direction. The useful phrase was essentially “choose your own adventure”: not a linear case with a correct answer, but a conversation that changes when the learner changes.
That matters because many CME simulations are still better at evaluating what a clinician knows than how a clinician navigates resistance, confusion, fear, health literacy, or preference-sensitive decisions. Synthetic humans could make those harder-to-standardize moments repeatable: a resistant diabetes patient, a shared decision-making discussion, a culturally specific communication challenge, or a patient education interaction that can be updated quickly.
This also connects to a longer thread we have tracked in an earlier brief on precision education and learner control: personalization becomes more useful when it is tied to real learner behavior, not just declared preferences. Here, the learner behavior is the interaction itself—what was said, how it was said, where the conversation broke down, and what feedback was delivered immediately afterward.
The caveat is important. This was a conference interview with a CPD professional, and the public evidence does not include independent clinician corroboration or peer-reviewed outcome metrics. The transcript includes examples from opioid REMS and diabetes-related practice, plus claims about learner trust and fast outcomes analysis, but those should be read as implementation experience rather than settled evidence.
For CME providers, the point is not that avatars replace faculty or standardized patients. Gordeau’s own framing was more restrained: “It can replace some tasks, but it can be an integral part of being a great assistant in education for us.” The implementation question is therefore specific: where would a synthetic human produce better rehearsal, feedback, or measurement than the format you already use?
The decision is not whether synthetic humans are impressive. The decision is whether your organization can run a small, instrumented pilot that is transparent to learners, reviewed by humans, and honest about what it can and cannot prove. Waiting for stronger external validation may be reasonable. But if CME teams wait without learning how to evaluate these systems, they may find themselves behind on the governance, measurement, and instructional design questions that will determine whether adaptive simulation becomes useful or merely novel.
In the full episode, Gordeau details real-time gap analysis, tone adaptation, emotionally nuanced avatars used for shared decision-making practice (opioid REMS, diabetes), dual clinician and patient-coach roles, a three-year deployment history, and scalability for remote settings.
ChatCME surfaces the questions clinicians actually ask — so you can build activities that close real knowledge gaps.
Request a demo