CME Planning Now Requires Defining the Destination Before Choosing the Route
Earlier coverage of learning design and its implications for CME providers.
This week’s signal: learner trust and planning rigor both point to the same constraint. Format choices matter less when inputs are weak.
Hematology/oncology fellows at ASCO 2024 ranked NCCN guidelines and reference websites as their most useful resources for both studying and clinical decision-making, while social media and online boards sat at the bottom. The sample is oncology-heavy, but the provider implication is broader: CME teams should treat trusted inputs as a design requirement, not an afterthought.
The clearest clinician-side signal this week came from ASCO discussion of a multicenter analysis of educational resources used by hematology/oncology fellows. One physician summary of the presentation noted that fellows use “reference sites and nccn most for studying and decision-making,” while social media and online boards were less helpful and question banks sat in the middle (source). A second ASCO post pointed to the same trainee-resource research as a medical education signal, even if the details available publicly were limited (source).
This matters because many education products still treat format as the innovation variable: shorter videos, social-style cards, posts, feeds, polls. Those can be useful. But if the learning environment does not make the trusted source obvious, the activity may feel fast rather than dependable.
For CME providers, the lesson is not “avoid social formats.” It is to stop confusing social packaging with learner trust. If an activity uses a feed, chat, short-form module, or conference-adjacent recap, the guideline, reference source, evidence basis, and update status need to be visible in the workflow. The question for product and editorial teams is simple: would a learner know, within seconds, why this item deserves clinical attention?
A separate provider-facing signal this week points in the same direction: better educational outputs depend on better planning inputs. The evidence here is different from the fellow-resource signal. It comes from CME and nursing continuing education podcasts, not independent clinician conversation, so it should be read as a planning-practice signal rather than broad clinician consensus.
In a Write Medicine episode, the discussion framed modern needs assessment work around explicit gap mapping: what is the gap, what is the root cause, what outcome is desired, and how the objective connects to that chain (source). A nursing accreditation-focused episode made the same operational point from another angle: planners should resist jumping straight to logistics and first ask what problem they are trying to solve, whether education can solve it, who actually needs to be involved, and how success will be evaluated (source).
That connects directly to the provider discipline we noted in an earlier brief on defining the destination before choosing the route. The current week adds a sharper operational implication: source trust and root-cause planning are not separate quality checks. They are both input controls.
For CME teams, a needs assessment that names a gap but does not explain why the gap exists is now a weak planning artifact. It may support an activity description, but it will struggle to support credible outcomes measurement, grant review, or instructional choices. The question is whether your templates force planners to document the cause-and-effect chain before anyone chooses the format, faculty, channel, or metric.
The useful signal is not that guidelines beat social media, or that root-cause analysis matters. CME teams already know both in theory. What changed is that the two signals landed together: learners are showing where they place trust, while planning experts are tightening expectations for how educational problems are defined.
That puts pressure on a common shortcut: making education look modern before proving that its inputs are credible. The better question for the next product review or proposal meeting is not “Is this engaging?” It is “Are the sources, causes, and outcomes strong enough for engagement to matter?”
Fellows report highest reliance on NCCN guidelines and reference sites for both exam prep and point-of-care decisions.
"So proud of @rmistry91 on his rapid oral abstract presentation! 👉fellows use reference sites and nccn most for studying and decision-making 👉social media and online boards less helpful 👉Q banks somewhere in the middle"
Social media and online boards rank lowest; question banks sit in the middle.
Earlier coverage of outcomes planning and its implications for CME providers.
"Proud to support my friend and @VUMCHemOnc co-fellow @rmistry91 as he presented his important work on understanding the resources that trainees in heme/onc use for #meded. What a boss! #ASCO24 @vpatelmd @DanielHausrath @TheFellowOnCall"
Emphasizes need for explicit gap mapping, root-cause analysis, and outcome linkage in modern assessments.
Highlights scope for further education on proper root-cause methods among planners.