Personalization Raises Motivation but Not Knowledge, as IME Grant Strategy Shifts to Scientific Journeys
An RCT shows personalization raises motivation but not knowledge transfer, while IME grant strategy must now map explicitly to sponsor scientific journeys.
Personalized e-learning raised motivation and perceived relevance in a randomized trial, but it did not improve knowledge. That narrow but concrete finding pairs with a separate medical-affairs discussion pointing in the same direction: CME teams are being asked to justify design choices and grant strategy with more disciplined outcome logic.
A research podcast this week summarized a randomized trial of 406 health professionals (pharmacists, physicians, and advanced practice providers) comparing an enhanced 30-minute e-learning module with a more standard slides-and-voiceover version. The enhanced version used conversational language, visible authors, professional presence, and short expert videos. It produced a large gain in perceived personalization and embodiment and a smaller gain in motivational features, but no statistically significant knowledge improvement: post-module knowledge scores differed by 0.04 points out of 10 (p = 0.78), according to the podcast's summary of the Medical Education paper.
For CME providers, the budget lesson is blunt. If the goal is learner attention, relevance, or perceived connection, low-cost personalization may be worth testing. If the goal is knowledge transfer, the evidence does not support assuming that expert video clips or visible-author production will do the work by themselves.
The caveat matters: this is one RCT, reported here through a single research podcast, testing an antibiotic stewardship module. Still, the sample was multi-professional, and the design question is portable. It also reinforces a pattern we noted in an earlier brief on root-cause needs assessments and format choices: format should follow the learning problem, not the other way around.
The implication for CME teams is to separate “makes the module feel better” from “changes what clinicians can apply.” A cheaper script rewrite may be enough for relevance; knowledge transfer may require retrieval practice, application, spacing, or follow-up measurement.
The second signal came from a medical-affairs expert interview on the evolving relationship between IME and medical affairs. The core point was not that CME should become promotional. It was that independent education is being judged in a broader strategic frame: where clinicians are in a scientific journey, what root causes explain the gap, and how accredited education can complement—not copy—the sponsor’s annual medical goals.
In the Write Medicine discussion, the expert described IME teams as increasingly connected to medical product teams and annual scientific goals, while preserving independence from product adoption tactics. For providers, that changes the proposal language. A needs assessment that only says “clinicians lack knowledge” is thinner than one that shows where learners are stuck (awareness, understanding, appropriate integration, or advocacy), what barrier is operating, and what outcome level will show movement.
This is a single expert perspective, not broad market proof. But it is consistent with the pressure CME teams already feel around credible outcomes reporting. We saw a related concern in an earlier brief on impact numbers supporters will actually believe: the measurement story has to be specific enough to survive scrutiny.
The question for CME providers is whether grant proposals still read like isolated educational activities. If they do, the work may be sound but under-translated. The stronger frame is: here is the clinician journey, here is the root cause, here is where education can independently move the learner, and here is how we will know.
The useful signal is not that personalization is bad or that IME must become more sponsor-facing. It is that CME teams need cleaner evidence chains. Spend where the outcome warrants it. Describe independent education in the language of learner movement, root cause, and measurable change. The providers that do both will have a stronger case with learners, faculty, and supporters.
Sources:
- Randomized trial of 406 health professionals demonstrating statistically significant motivation gains (Cohen's d = 0.85) but no knowledge improvement (p = 0.78) from personalization tactics.
- Expert perspective that IME success requires root-cause analysis, Moore-framework outcomes, and explicit linkage to the sponsor's scientific journey rather than isolated grant-making.