Educators this week challenged two familiar shortcuts in CME: counting social reach as knowledge translation, and treating simulation effectiveness as a reason practicing clinicians will show up. The evidence comes from educator and CPD conversations rather than broad usage analytics, but both signals point to the same operating problem: learning does not happen just because a channel is easy or a format is evidence-based.
Health professions educators described a deliberate retreat from X/Twitter: fewer logins, poorer content visibility, more irrelevant material, and less appetite for nuanced discussion. In one educator panel on social media and HPE knowledge translation, former heavy users said they had relied on the platform for resource promotion, event publicity, self-learning, and community visibility, but that the conversation no longer works the way it once did.
The more important point was not platform frustration. It was the question underneath it: did social media ever deliver true knowledge translation, or mostly diffusion? In the related PAPERs Podcast discussion, educators distinguished passive posting from tailored dissemination, knowledge exchange, synthesis, and feedback. Page views, likes, and citations may show that something was seen. They do not show that a clinician understood it, trusted it, discussed it, or changed practice.
For CME providers, this cuts directly into conference amplification, faculty promotion, journal-club extensions, and post-activity nudges. If X was functioning as a cheap distribution layer, that layer now needs an audit. Which audiences are still there? Which ones have moved? Which behaviors are being measured after the click? The implication is to stop treating social as a generic megaphone and start designing channel-specific follow-up loops: targeted messages, clearer source links, discussion prompts, and outcome measures that extend beyond impressions.
A separate CPD conversation raised a parallel problem for simulation. In a JCEHP companion podcast on simulation as a CPD strategy, simulation educators and researchers described a gap between the evidence base for simulation and the willingness of practicing clinicians to participate. Undergraduate and postgraduate learners may be easier to bring into simulation; practicing clinicians bring time constraints, autonomy, professional identity, and fear of judgment.
This is an emerging signal from a single academic podcast source, so it should not be overstated as broad clinician consensus. But it is useful because it names a common provider mistake: assuming that “this works” is enough to make clinicians enroll. The discussion emphasized that many studies continue to demonstrate simulation effectiveness while leaving the harder adoption question underdeveloped: why do clinicians not line up for a modality that educators believe is valuable?
The design implication is especially relevant for procedural and high-reliability specialties, but not limited to them. CME teams building simulation need to treat psychological safety, role specificity, scenario realism, debriefing quality, and learner identity as part of the intervention—not as facilitation details added after the agenda is built. We saw a related pattern in an earlier brief on clinicians wanting coaching programs rather than more lectures: the format matters less than whether the learner believes the environment will help them improve without making them feel exposed. The question for CME teams is not only “Is simulation effective?” It is “What would make this particular clinician trust the room enough to participate?”
The common thread this week is that CME teams cannot outsource learning to either the channel or the method. Social media can distribute a message without translating it. Simulation can be educationally strong while still failing to attract the clinicians who need it. The useful audit is simple: where are you assuming that exposure equals uptake, or that evidence equals participation? Those are the places where CME design needs more audience specificity, more feedback, and more attention to trust.
HPE educators describe reduced logins and loss of nuanced discussion, shifting from active KT to passive diffusion or 'carpet bombing.'
Same cohort questions whether social media ever achieved true knowledge translation versus simple resource promotion and event publicity.
Educators note that studies assume effectiveness a priori; post-experience data show reduced threat perception, but initial resistance persists due to autonomy and ego concerns.